00:00:00.000 Started by upstream project "autotest-per-patch" build number 132588
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.038 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:02.137 The recommended git tool is: git
00:00:02.138 using credential 00000000-0000-0000-0000-000000000002
00:00:02.139 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:02.149 Fetching changes from the remote Git repository
00:00:02.153 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:02.162 Using shallow fetch with depth 1
00:00:02.162 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:02.162 > git --version # timeout=10
00:00:02.171 > git --version # 'git version 2.39.2'
00:00:02.171 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:02.181 Setting http proxy: proxy-dmz.intel.com:911
00:00:02.181 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.497 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.511 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.525 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:08.525 > git config core.sparsecheckout # timeout=10
00:00:08.536 > git read-tree -mu HEAD # timeout=10
00:00:08.553 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:08.576 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:08.576 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:08.657 [Pipeline] Start of Pipeline
00:00:08.672 [Pipeline] library
00:00:08.673 Loading library shm_lib@master
00:00:08.674 Library shm_lib@master is cached. Copying from home.
00:00:08.690 [Pipeline] node
00:00:08.698 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:08.700 [Pipeline] {
00:00:08.707 [Pipeline] catchError
00:00:08.709 [Pipeline] {
00:00:08.720 [Pipeline] wrap
00:00:08.728 [Pipeline] {
00:00:08.736 [Pipeline] stage
00:00:08.738 [Pipeline] { (Prologue)
00:00:08.910 [Pipeline] sh
00:00:09.216 + logger -p user.info -t JENKINS-CI
00:00:09.239 [Pipeline] echo
00:00:09.240 Node: WFP8
00:00:09.248 [Pipeline] sh
00:00:09.548 [Pipeline] setCustomBuildProperty
00:00:09.562 [Pipeline] echo
00:00:09.564 Cleanup processes
00:00:09.570 [Pipeline] sh
00:00:09.856 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.856 2248784 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.870 [Pipeline] sh
00:00:10.156 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.156 ++ grep -v 'sudo pgrep'
00:00:10.156 ++ awk '{print $1}'
00:00:10.156 + sudo kill -9
00:00:10.156 + true
00:00:10.175 [Pipeline] cleanWs
00:00:10.186 [WS-CLEANUP] Deleting project workspace...
00:00:10.186 [WS-CLEANUP] Deferred wipeout is used...
00:00:10.194 [WS-CLEANUP] done
00:00:10.198 [Pipeline] setCustomBuildProperty
00:00:10.212 [Pipeline] sh
00:00:10.496 + sudo git config --global --replace-all safe.directory '*'
00:00:10.597 [Pipeline] httpRequest
00:00:11.042 [Pipeline] echo
00:00:11.044 Sorcerer 10.211.164.20 is alive
00:00:11.051 [Pipeline] retry
00:00:11.052 [Pipeline] {
00:00:11.063 [Pipeline] httpRequest
00:00:11.068 HttpMethod: GET
00:00:11.068 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.068 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.071 Response Code: HTTP/1.1 200 OK
00:00:11.071 Success: Status code 200 is in the accepted range: 200,404
00:00:11.071 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.317 [Pipeline] }
00:00:12.336 [Pipeline] // retry
00:00:12.344 [Pipeline] sh
00:00:12.631 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.648 [Pipeline] httpRequest
00:00:13.018 [Pipeline] echo
00:00:13.019 Sorcerer 10.211.164.20 is alive
00:00:13.028 [Pipeline] retry
00:00:13.030 [Pipeline] {
00:00:13.045 [Pipeline] httpRequest
00:00:13.050 HttpMethod: GET
00:00:13.050 URL: http://10.211.164.20/packages/spdk_bf92c7a4260c2032f8d586ff3b58846993fb1d59.tar.gz
00:00:13.050 Sending request to url: http://10.211.164.20/packages/spdk_bf92c7a4260c2032f8d586ff3b58846993fb1d59.tar.gz
00:00:13.073 Response Code: HTTP/1.1 200 OK
00:00:13.074 Success: Status code 200 is in the accepted range: 200,404
00:00:13.074 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_bf92c7a4260c2032f8d586ff3b58846993fb1d59.tar.gz
00:01:27.967 [Pipeline] }
00:01:27.984 [Pipeline] // retry
00:01:27.991 [Pipeline] sh
00:01:28.274 + tar --no-same-owner -xf spdk_bf92c7a4260c2032f8d586ff3b58846993fb1d59.tar.gz
00:01:30.825 [Pipeline] sh
00:01:31.109 + git -C spdk log --oneline -n5
00:01:31.109 bf92c7a42 bdev/nvme: Use nbdev always for local nvme_bdev pointer variables
00:01:31.109 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:01:31.109 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:01:31.109 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev
00:01:31.109 2e10c84c8 nvmf: Expose DIF type of namespace to host again
00:01:31.119 [Pipeline] }
00:01:31.134 [Pipeline] // stage
00:01:31.143 [Pipeline] stage
00:01:31.145 [Pipeline] { (Prepare)
00:01:31.162 [Pipeline] writeFile
00:01:31.178 [Pipeline] sh
00:01:31.460 + logger -p user.info -t JENKINS-CI
00:01:31.472 [Pipeline] sh
00:01:31.754 + logger -p user.info -t JENKINS-CI
00:01:31.766 [Pipeline] sh
00:01:32.049 + cat autorun-spdk.conf
00:01:32.049 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:32.049 SPDK_TEST_NVMF=1
00:01:32.049 SPDK_TEST_NVME_CLI=1
00:01:32.049 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:32.049 SPDK_TEST_NVMF_NICS=e810
00:01:32.049 SPDK_TEST_VFIOUSER=1
00:01:32.049 SPDK_RUN_UBSAN=1
00:01:32.049 NET_TYPE=phy
00:01:32.056 RUN_NIGHTLY=0
00:01:32.059 [Pipeline] readFile
00:01:32.081 [Pipeline] withEnv
00:01:32.083 [Pipeline] {
00:01:32.095 [Pipeline] sh
00:01:32.378 + set -ex
00:01:32.378 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:32.378 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:32.378 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:32.378 ++ SPDK_TEST_NVMF=1
00:01:32.378 ++ SPDK_TEST_NVME_CLI=1
00:01:32.378 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:32.378 ++ SPDK_TEST_NVMF_NICS=e810
00:01:32.378 ++ SPDK_TEST_VFIOUSER=1
00:01:32.378 ++ SPDK_RUN_UBSAN=1
00:01:32.378 ++ NET_TYPE=phy
00:01:32.378 ++ RUN_NIGHTLY=0
00:01:32.378 + case $SPDK_TEST_NVMF_NICS in
00:01:32.378 + DRIVERS=ice
00:01:32.378 + [[ tcp == \r\d\m\a ]]
00:01:32.378 + [[ -n ice ]]
00:01:32.378 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:32.378 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:32.378 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:32.378 rmmod: ERROR: Module irdma is not currently loaded
00:01:32.378 rmmod: ERROR: Module i40iw is not currently loaded
00:01:32.378 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:32.378 + true
00:01:32.379 + for D in $DRIVERS
00:01:32.379 + sudo modprobe ice
00:01:32.379 + exit 0
00:01:32.387 [Pipeline] }
00:01:32.401 [Pipeline] // withEnv
00:01:32.406 [Pipeline] }
00:01:32.419 [Pipeline] // stage
00:01:32.428 [Pipeline] catchError
00:01:32.430 [Pipeline] {
00:01:32.443 [Pipeline] timeout
00:01:32.443 Timeout set to expire in 1 hr 0 min
00:01:32.444 [Pipeline] {
00:01:32.457 [Pipeline] stage
00:01:32.459 [Pipeline] { (Tests)
00:01:32.473 [Pipeline] sh
00:01:32.813 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:32.813 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:32.813 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:32.813 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:32.813 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:32.813 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:32.813 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:32.813 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:32.813 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:32.813 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:32.813 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:32.813 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:32.813 + source /etc/os-release
00:01:32.813 ++ NAME='Fedora Linux'
00:01:32.813 ++ VERSION='39 (Cloud Edition)'
00:01:32.813 ++ ID=fedora
00:01:32.813 ++ VERSION_ID=39
00:01:32.813 ++ VERSION_CODENAME=
00:01:32.813 ++ PLATFORM_ID=platform:f39
00:01:32.813 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:32.813 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:32.813 ++ LOGO=fedora-logo-icon
00:01:32.813 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:32.813 ++ HOME_URL=https://fedoraproject.org/
00:01:32.813 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:32.813 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:32.813 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:32.813 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:32.813 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:32.813 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:32.813 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:32.813 ++ SUPPORT_END=2024-11-12
00:01:32.813 ++ VARIANT='Cloud Edition'
00:01:32.813 ++ VARIANT_ID=cloud
00:01:32.813 + uname -a
00:01:32.813 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:32.813 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:35.350 Hugepages
00:01:35.350 node hugesize free / total
00:01:35.350 node0 1048576kB 0 / 0
00:01:35.350 node0 2048kB 0 / 0
00:01:35.350 node1 1048576kB 0 / 0
00:01:35.350 node1 2048kB 0 / 0
00:01:35.350 
00:01:35.350 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:35.350 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:35.350 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:35.350 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:35.350 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:35.350 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:35.350 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:35.350 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:35.350 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:35.350 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:35.350 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:35.350 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:35.350 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:35.350 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:35.350 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:35.350 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:35.350 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:35.350 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:35.350 + rm -f /tmp/spdk-ld-path
00:01:35.350 + source autorun-spdk.conf
00:01:35.350 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:35.350 ++ SPDK_TEST_NVMF=1
00:01:35.350 ++ SPDK_TEST_NVME_CLI=1
00:01:35.350 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:35.350 ++ SPDK_TEST_NVMF_NICS=e810
00:01:35.350 ++ SPDK_TEST_VFIOUSER=1
00:01:35.350 ++ SPDK_RUN_UBSAN=1
00:01:35.350 ++ NET_TYPE=phy
00:01:35.350 ++ RUN_NIGHTLY=0
00:01:35.350 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:35.350 + [[ -n '' ]]
00:01:35.350 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:35.350 + for M in /var/spdk/build-*-manifest.txt
00:01:35.350 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:35.350 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:35.350 + for M in /var/spdk/build-*-manifest.txt
00:01:35.350 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:35.350 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:35.350 + for M in /var/spdk/build-*-manifest.txt
00:01:35.350 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:35.350 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:35.350 ++ uname
00:01:35.350 + [[ Linux == \L\i\n\u\x ]]
00:01:35.350 + sudo dmesg -T
00:01:35.350 + sudo dmesg --clear
00:01:35.350 + dmesg_pid=2250241
00:01:35.350 + [[ Fedora Linux == FreeBSD ]]
00:01:35.350 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:35.350 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:35.350 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:35.350 + [[ -x /usr/src/fio-static/fio ]]
00:01:35.350 + export FIO_BIN=/usr/src/fio-static/fio
00:01:35.350 + FIO_BIN=/usr/src/fio-static/fio
00:01:35.350 + sudo dmesg -Tw
00:01:35.350 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:35.350 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:35.350 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:35.350 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:35.350 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:35.350 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:35.350 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:35.350 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:35.350 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:35.350 12:25:17 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:35.350 12:25:17 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:35.350 12:25:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:35.350 12:25:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:35.350 12:25:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:35.350 12:25:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:35.350 12:25:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:35.350 12:25:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:35.350 12:25:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:35.350 12:25:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:35.350 12:25:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:35.350 12:25:17 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:35.350 12:25:17 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:35.350 12:25:17 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:35.350 12:25:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:35.350 12:25:17 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:35.350 12:25:17 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:35.350 12:25:17 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:35.350 12:25:17 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:35.350 12:25:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:35.350 12:25:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:35.350 12:25:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:35.350 12:25:17 -- paths/export.sh@5 -- $ export PATH
00:01:35.350 12:25:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:35.350 12:25:17 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:35.350 12:25:17 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:35.350 12:25:17 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732793117.XXXXXX
00:01:35.350 12:25:17 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732793117.ZKprUS
00:01:35.350 12:25:17 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:35.350 12:25:17 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:35.350 12:25:17 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:35.350 12:25:17 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:35.351 12:25:17 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:35.351 12:25:17 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:35.351 12:25:17 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:35.351 12:25:17 -- common/autotest_common.sh@10 -- $ set +x
00:01:35.351 12:25:17 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:35.351 12:25:17 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:35.351 12:25:17 -- pm/common@17 -- $ local monitor
00:01:35.351 12:25:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:35.351 12:25:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:35.351 12:25:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:35.351 12:25:17 -- pm/common@21 -- $ date +%s
00:01:35.351 12:25:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:35.351 12:25:17 -- pm/common@21 -- $ date +%s
00:01:35.351 12:25:17 -- pm/common@25 -- $ sleep 1
00:01:35.610 12:25:17 -- pm/common@21 -- $ date +%s
00:01:35.610 12:25:17 -- pm/common@21 -- $ date +%s
00:01:35.610 12:25:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732793117
00:01:35.610 12:25:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732793117
00:01:35.610 12:25:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732793117
00:01:35.610 12:25:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732793117
00:01:35.610 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732793117_collect-cpu-load.pm.log
00:01:35.610 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732793117_collect-vmstat.pm.log
00:01:35.610 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732793117_collect-cpu-temp.pm.log
00:01:35.610 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732793117_collect-bmc-pm.bmc.pm.log
00:01:36.548 12:25:18 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:36.548 12:25:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:36.548 12:25:18 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:36.548 12:25:18 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:36.548 12:25:18 -- spdk/autobuild.sh@16 -- $ date -u
00:01:36.548 Thu Nov 28 11:25:18 AM UTC 2024
00:01:36.548 12:25:18 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:36.548 v25.01-pre-277-gbf92c7a42
00:01:36.549 12:25:18 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:36.549 12:25:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:36.549 12:25:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:36.549 12:25:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:36.549 12:25:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:36.549 12:25:18 -- common/autotest_common.sh@10 -- $ set +x
00:01:36.549 ************************************
00:01:36.549 START TEST ubsan
00:01:36.549 ************************************
00:01:36.549 12:25:18 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:36.549 using ubsan
00:01:36.549 
00:01:36.549 real 0m0.000s
00:01:36.549 user 0m0.000s
00:01:36.549 sys 0m0.000s
00:01:36.549 12:25:18 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:36.549 12:25:18 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:36.549 ************************************
00:01:36.549 END TEST ubsan
00:01:36.549 ************************************
00:01:36.549 12:25:18 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:36.549 12:25:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:36.549 12:25:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:36.549 12:25:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:36.549 12:25:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:36.549 12:25:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:36.549 12:25:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:36.549 12:25:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:36.549 12:25:18 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:36.808 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:36.808 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:37.066 Using 'verbs' RDMA provider
00:01:49.841 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:02.047 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:02.047 Creating mk/config.mk...done.
00:02:02.047 Creating mk/cc.flags.mk...done.
00:02:02.047 Type 'make' to build.
00:02:02.047 12:25:43 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:02:02.047 12:25:43 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:02.047 12:25:43 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:02.047 12:25:43 -- common/autotest_common.sh@10 -- $ set +x
00:02:02.048 ************************************
00:02:02.048 START TEST make
00:02:02.048 ************************************
00:02:02.048 12:25:43 make -- common/autotest_common.sh@1129 -- $ make -j96
00:02:02.048 make[1]: Nothing to be done for 'all'.
00:02:02.626 The Meson build system
00:02:02.626 Version: 1.5.0
00:02:02.626 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:02.626 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:02.626 Build type: native build
00:02:02.626 Project name: libvfio-user
00:02:02.626 Project version: 0.0.1
00:02:02.626 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:02.626 C linker for the host machine: cc ld.bfd 2.40-14
00:02:02.626 Host machine cpu family: x86_64
00:02:02.626 Host machine cpu: x86_64
00:02:02.626 Run-time dependency threads found: YES
00:02:02.626 Library dl found: YES
00:02:02.626 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:02.626 Run-time dependency json-c found: YES 0.17
00:02:02.626 Run-time dependency cmocka found: YES 1.1.7
00:02:02.626 Program pytest-3 found: NO
00:02:02.626 Program flake8 found: NO
00:02:02.626 Program misspell-fixer found: NO
00:02:02.626 Program restructuredtext-lint found: NO
00:02:02.626 Program valgrind found: YES (/usr/bin/valgrind)
00:02:02.626 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:02.626 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:02.626 Compiler for C supports arguments -Wwrite-strings: YES
00:02:02.626 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:02.626 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:02.626 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:02.626 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:02.626 Build targets in project: 8
00:02:02.626 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:02.626 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:02.626 
00:02:02.626 libvfio-user 0.0.1
00:02:02.626 
00:02:02.626 User defined options
00:02:02.626 buildtype : debug
00:02:02.626 default_library: shared
00:02:02.626 libdir : /usr/local/lib
00:02:02.626 
00:02:02.626 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:03.564 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:03.564 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:03.564 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:03.564 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:03.564 [4/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:03.564 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:03.564 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:03.564 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:03.564 [8/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:03.564 [9/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:03.564 [10/37] Compiling C object samples/null.p/null.c.o
00:02:03.564 [11/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:03.564 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:03.564 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:03.564 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:03.564 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:03.564 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:03.564 [17/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:03.564 [18/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:03.564 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:03.564 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:03.564 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:03.564 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:03.564 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:03.564 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:03.564 [25/37] Compiling C object samples/server.p/server.c.o
00:02:03.564 [26/37] Compiling C object samples/client.p/client.c.o
00:02:03.564 [27/37] Linking target samples/client
00:02:03.564 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:03.823 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:03.823 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:02:03.823 [31/37] Linking target test/unit_tests
00:02:03.823 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:03.823 [33/37] Linking target samples/null
00:02:03.823 [34/37] Linking target samples/server
00:02:03.823 [35/37] Linking target samples/gpio-pci-idio-16
00:02:03.823 [36/37] Linking target samples/lspci
00:02:03.823 [37/37] Linking target samples/shadow_ioeventfd_server
00:02:03.823 INFO: autodetecting backend as ninja
00:02:03.823 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:03.823 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:04.391 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:04.391 ninja: no work to do.
00:02:08.581 The Meson build system
00:02:08.581 Version: 1.5.0
00:02:08.581 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:08.581 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:08.581 Build type: native build
00:02:08.581 Program cat found: YES (/usr/bin/cat)
00:02:08.581 Project name: DPDK
00:02:08.581 Project version: 24.03.0
00:02:08.581 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:08.581 C linker for the host machine: cc ld.bfd 2.40-14
00:02:08.581 Host machine cpu family: x86_64
00:02:08.581 Host machine cpu: x86_64
00:02:08.581 Message: ## Building in Developer Mode ##
00:02:08.581 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:08.581 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:08.581 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:08.581 Program python3 found: YES (/usr/bin/python3)
00:02:08.581 Program cat found: YES (/usr/bin/cat)
00:02:08.581 Compiler for C supports arguments -march=native: YES
00:02:08.581 Checking for size of "void *" : 8
00:02:08.581 Checking for size of "void *" : 8 (cached)
00:02:08.581 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:08.581 Library m found: YES
00:02:08.581 Library numa found: YES
00:02:08.581 Has header "numaif.h" : YES
00:02:08.581 Library fdt found: NO
00:02:08.581 Library execinfo found: NO
00:02:08.581 Has header "execinfo.h" : YES
00:02:08.581 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:08.581 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:08.581 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:08.581 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:08.581 Run-time dependency openssl found: YES 3.1.1
00:02:08.581 Run-time dependency libpcap found: YES 1.10.4
00:02:08.581 Has header "pcap.h" with dependency libpcap: YES
00:02:08.581 Compiler for C supports arguments -Wcast-qual: YES
00:02:08.581 Compiler for C supports arguments -Wdeprecated: YES
00:02:08.581 Compiler for C supports arguments -Wformat: YES
00:02:08.581 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:08.581 Compiler for C supports arguments -Wformat-security: NO
00:02:08.581 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:08.581 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:08.581 Compiler for C supports arguments -Wnested-externs: YES
00:02:08.581 Compiler for C supports arguments -Wold-style-definition: YES
00:02:08.581 Compiler for C supports arguments -Wpointer-arith: YES
00:02:08.581 Compiler for C supports arguments -Wsign-compare: YES
00:02:08.581 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:08.581 Compiler for C supports arguments -Wundef: YES
00:02:08.581 Compiler for C supports arguments -Wwrite-strings: YES
00:02:08.581 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:08.581 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:08.581 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:08.581 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:08.581 Program objdump found: YES (/usr/bin/objdump)
00:02:08.581 Compiler for C supports arguments -mavx512f: YES
00:02:08.581 Checking if "AVX512 checking" compiles: YES
00:02:08.581 Fetching value of define "__SSE4_2__" : 1
00:02:08.581 Fetching value of define "__AES__" : 1
00:02:08.581 Fetching value of define "__AVX__" : 1
00:02:08.581 Fetching value of define "__AVX2__" : 1
00:02:08.581 Fetching value of define "__AVX512BW__" : 1
00:02:08.581 Fetching value of define "__AVX512CD__" : 1
00:02:08.581 Fetching value of define "__AVX512DQ__" : 1
00:02:08.581 Fetching value of define "__AVX512F__" : 1
00:02:08.581 Fetching value of define "__AVX512VL__" : 1 00:02:08.581 Fetching value of define "__PCLMUL__" : 1 00:02:08.581 Fetching value of define "__RDRND__" : 1 00:02:08.581 Fetching value of define "__RDSEED__" : 1 00:02:08.581 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:08.581 Fetching value of define "__znver1__" : (undefined) 00:02:08.581 Fetching value of define "__znver2__" : (undefined) 00:02:08.581 Fetching value of define "__znver3__" : (undefined) 00:02:08.581 Fetching value of define "__znver4__" : (undefined) 00:02:08.581 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:08.581 Message: lib/log: Defining dependency "log" 00:02:08.581 Message: lib/kvargs: Defining dependency "kvargs" 00:02:08.581 Message: lib/telemetry: Defining dependency "telemetry" 00:02:08.581 Checking for function "getentropy" : NO 00:02:08.581 Message: lib/eal: Defining dependency "eal" 00:02:08.581 Message: lib/ring: Defining dependency "ring" 00:02:08.581 Message: lib/rcu: Defining dependency "rcu" 00:02:08.581 Message: lib/mempool: Defining dependency "mempool" 00:02:08.581 Message: lib/mbuf: Defining dependency "mbuf" 00:02:08.581 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:08.581 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:08.581 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:08.581 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:08.581 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:08.581 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:08.581 Compiler for C supports arguments -mpclmul: YES 00:02:08.581 Compiler for C supports arguments -maes: YES 00:02:08.581 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:08.581 Compiler for C supports arguments -mavx512bw: YES 00:02:08.581 Compiler for C supports arguments -mavx512dq: YES 00:02:08.581 Compiler for C supports arguments -mavx512vl: YES 00:02:08.581 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:02:08.581 Compiler for C supports arguments -mavx2: YES 00:02:08.581 Compiler for C supports arguments -mavx: YES 00:02:08.581 Message: lib/net: Defining dependency "net" 00:02:08.581 Message: lib/meter: Defining dependency "meter" 00:02:08.581 Message: lib/ethdev: Defining dependency "ethdev" 00:02:08.581 Message: lib/pci: Defining dependency "pci" 00:02:08.581 Message: lib/cmdline: Defining dependency "cmdline" 00:02:08.581 Message: lib/hash: Defining dependency "hash" 00:02:08.581 Message: lib/timer: Defining dependency "timer" 00:02:08.581 Message: lib/compressdev: Defining dependency "compressdev" 00:02:08.581 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:08.581 Message: lib/dmadev: Defining dependency "dmadev" 00:02:08.581 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:08.581 Message: lib/power: Defining dependency "power" 00:02:08.581 Message: lib/reorder: Defining dependency "reorder" 00:02:08.581 Message: lib/security: Defining dependency "security" 00:02:08.581 Has header "linux/userfaultfd.h" : YES 00:02:08.581 Has header "linux/vduse.h" : YES 00:02:08.581 Message: lib/vhost: Defining dependency "vhost" 00:02:08.581 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:08.581 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:08.581 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:08.581 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:08.581 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:08.581 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:08.581 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:08.581 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:08.581 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:08.581 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:02:08.581 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:08.581 Configuring doxy-api-html.conf using configuration 00:02:08.581 Configuring doxy-api-man.conf using configuration 00:02:08.581 Program mandb found: YES (/usr/bin/mandb) 00:02:08.581 Program sphinx-build found: NO 00:02:08.582 Configuring rte_build_config.h using configuration 00:02:08.582 Message: 00:02:08.582 ================= 00:02:08.582 Applications Enabled 00:02:08.582 ================= 00:02:08.582 00:02:08.582 apps: 00:02:08.582 00:02:08.582 00:02:08.582 Message: 00:02:08.582 ================= 00:02:08.582 Libraries Enabled 00:02:08.582 ================= 00:02:08.582 00:02:08.582 libs: 00:02:08.582 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:08.582 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:08.582 cryptodev, dmadev, power, reorder, security, vhost, 00:02:08.582 00:02:08.582 Message: 00:02:08.582 =============== 00:02:08.582 Drivers Enabled 00:02:08.582 =============== 00:02:08.582 00:02:08.582 common: 00:02:08.582 00:02:08.582 bus: 00:02:08.582 pci, vdev, 00:02:08.582 mempool: 00:02:08.582 ring, 00:02:08.582 dma: 00:02:08.582 00:02:08.582 net: 00:02:08.582 00:02:08.582 crypto: 00:02:08.582 00:02:08.582 compress: 00:02:08.582 00:02:08.582 vdpa: 00:02:08.582 00:02:08.582 00:02:08.582 Message: 00:02:08.582 ================= 00:02:08.582 Content Skipped 00:02:08.582 ================= 00:02:08.582 00:02:08.582 apps: 00:02:08.582 dumpcap: explicitly disabled via build config 00:02:08.582 graph: explicitly disabled via build config 00:02:08.582 pdump: explicitly disabled via build config 00:02:08.582 proc-info: explicitly disabled via build config 00:02:08.582 test-acl: explicitly disabled via build config 00:02:08.582 test-bbdev: explicitly disabled via build config 00:02:08.582 test-cmdline: explicitly disabled via build config 00:02:08.582 test-compress-perf: explicitly disabled via build config 00:02:08.582 test-crypto-perf: explicitly disabled 
via build config 00:02:08.582 test-dma-perf: explicitly disabled via build config 00:02:08.582 test-eventdev: explicitly disabled via build config 00:02:08.582 test-fib: explicitly disabled via build config 00:02:08.582 test-flow-perf: explicitly disabled via build config 00:02:08.582 test-gpudev: explicitly disabled via build config 00:02:08.582 test-mldev: explicitly disabled via build config 00:02:08.582 test-pipeline: explicitly disabled via build config 00:02:08.582 test-pmd: explicitly disabled via build config 00:02:08.582 test-regex: explicitly disabled via build config 00:02:08.582 test-sad: explicitly disabled via build config 00:02:08.582 test-security-perf: explicitly disabled via build config 00:02:08.582 00:02:08.582 libs: 00:02:08.582 argparse: explicitly disabled via build config 00:02:08.582 metrics: explicitly disabled via build config 00:02:08.582 acl: explicitly disabled via build config 00:02:08.582 bbdev: explicitly disabled via build config 00:02:08.582 bitratestats: explicitly disabled via build config 00:02:08.582 bpf: explicitly disabled via build config 00:02:08.582 cfgfile: explicitly disabled via build config 00:02:08.582 distributor: explicitly disabled via build config 00:02:08.582 efd: explicitly disabled via build config 00:02:08.582 eventdev: explicitly disabled via build config 00:02:08.582 dispatcher: explicitly disabled via build config 00:02:08.582 gpudev: explicitly disabled via build config 00:02:08.582 gro: explicitly disabled via build config 00:02:08.582 gso: explicitly disabled via build config 00:02:08.582 ip_frag: explicitly disabled via build config 00:02:08.582 jobstats: explicitly disabled via build config 00:02:08.582 latencystats: explicitly disabled via build config 00:02:08.582 lpm: explicitly disabled via build config 00:02:08.582 member: explicitly disabled via build config 00:02:08.582 pcapng: explicitly disabled via build config 00:02:08.582 rawdev: explicitly disabled via build config 00:02:08.582 regexdev: 
explicitly disabled via build config 00:02:08.582 mldev: explicitly disabled via build config 00:02:08.582 rib: explicitly disabled via build config 00:02:08.582 sched: explicitly disabled via build config 00:02:08.582 stack: explicitly disabled via build config 00:02:08.582 ipsec: explicitly disabled via build config 00:02:08.582 pdcp: explicitly disabled via build config 00:02:08.582 fib: explicitly disabled via build config 00:02:08.582 port: explicitly disabled via build config 00:02:08.582 pdump: explicitly disabled via build config 00:02:08.582 table: explicitly disabled via build config 00:02:08.582 pipeline: explicitly disabled via build config 00:02:08.582 graph: explicitly disabled via build config 00:02:08.582 node: explicitly disabled via build config 00:02:08.582 00:02:08.582 drivers: 00:02:08.582 common/cpt: not in enabled drivers build config 00:02:08.582 common/dpaax: not in enabled drivers build config 00:02:08.582 common/iavf: not in enabled drivers build config 00:02:08.582 common/idpf: not in enabled drivers build config 00:02:08.582 common/ionic: not in enabled drivers build config 00:02:08.582 common/mvep: not in enabled drivers build config 00:02:08.582 common/octeontx: not in enabled drivers build config 00:02:08.582 bus/auxiliary: not in enabled drivers build config 00:02:08.582 bus/cdx: not in enabled drivers build config 00:02:08.582 bus/dpaa: not in enabled drivers build config 00:02:08.582 bus/fslmc: not in enabled drivers build config 00:02:08.582 bus/ifpga: not in enabled drivers build config 00:02:08.582 bus/platform: not in enabled drivers build config 00:02:08.582 bus/uacce: not in enabled drivers build config 00:02:08.582 bus/vmbus: not in enabled drivers build config 00:02:08.582 common/cnxk: not in enabled drivers build config 00:02:08.582 common/mlx5: not in enabled drivers build config 00:02:08.582 common/nfp: not in enabled drivers build config 00:02:08.582 common/nitrox: not in enabled drivers build config 00:02:08.582 
common/qat: not in enabled drivers build config 00:02:08.582 common/sfc_efx: not in enabled drivers build config 00:02:08.582 mempool/bucket: not in enabled drivers build config 00:02:08.582 mempool/cnxk: not in enabled drivers build config 00:02:08.582 mempool/dpaa: not in enabled drivers build config 00:02:08.582 mempool/dpaa2: not in enabled drivers build config 00:02:08.582 mempool/octeontx: not in enabled drivers build config 00:02:08.582 mempool/stack: not in enabled drivers build config 00:02:08.582 dma/cnxk: not in enabled drivers build config 00:02:08.582 dma/dpaa: not in enabled drivers build config 00:02:08.582 dma/dpaa2: not in enabled drivers build config 00:02:08.582 dma/hisilicon: not in enabled drivers build config 00:02:08.582 dma/idxd: not in enabled drivers build config 00:02:08.582 dma/ioat: not in enabled drivers build config 00:02:08.582 dma/skeleton: not in enabled drivers build config 00:02:08.582 net/af_packet: not in enabled drivers build config 00:02:08.582 net/af_xdp: not in enabled drivers build config 00:02:08.582 net/ark: not in enabled drivers build config 00:02:08.582 net/atlantic: not in enabled drivers build config 00:02:08.582 net/avp: not in enabled drivers build config 00:02:08.582 net/axgbe: not in enabled drivers build config 00:02:08.582 net/bnx2x: not in enabled drivers build config 00:02:08.582 net/bnxt: not in enabled drivers build config 00:02:08.582 net/bonding: not in enabled drivers build config 00:02:08.582 net/cnxk: not in enabled drivers build config 00:02:08.582 net/cpfl: not in enabled drivers build config 00:02:08.582 net/cxgbe: not in enabled drivers build config 00:02:08.582 net/dpaa: not in enabled drivers build config 00:02:08.582 net/dpaa2: not in enabled drivers build config 00:02:08.582 net/e1000: not in enabled drivers build config 00:02:08.582 net/ena: not in enabled drivers build config 00:02:08.582 net/enetc: not in enabled drivers build config 00:02:08.582 net/enetfec: not in enabled drivers build 
config 00:02:08.582 net/enic: not in enabled drivers build config 00:02:08.582 net/failsafe: not in enabled drivers build config 00:02:08.582 net/fm10k: not in enabled drivers build config 00:02:08.582 net/gve: not in enabled drivers build config 00:02:08.582 net/hinic: not in enabled drivers build config 00:02:08.582 net/hns3: not in enabled drivers build config 00:02:08.582 net/i40e: not in enabled drivers build config 00:02:08.582 net/iavf: not in enabled drivers build config 00:02:08.582 net/ice: not in enabled drivers build config 00:02:08.582 net/idpf: not in enabled drivers build config 00:02:08.582 net/igc: not in enabled drivers build config 00:02:08.582 net/ionic: not in enabled drivers build config 00:02:08.582 net/ipn3ke: not in enabled drivers build config 00:02:08.582 net/ixgbe: not in enabled drivers build config 00:02:08.582 net/mana: not in enabled drivers build config 00:02:08.582 net/memif: not in enabled drivers build config 00:02:08.582 net/mlx4: not in enabled drivers build config 00:02:08.582 net/mlx5: not in enabled drivers build config 00:02:08.582 net/mvneta: not in enabled drivers build config 00:02:08.582 net/mvpp2: not in enabled drivers build config 00:02:08.582 net/netvsc: not in enabled drivers build config 00:02:08.582 net/nfb: not in enabled drivers build config 00:02:08.582 net/nfp: not in enabled drivers build config 00:02:08.582 net/ngbe: not in enabled drivers build config 00:02:08.582 net/null: not in enabled drivers build config 00:02:08.582 net/octeontx: not in enabled drivers build config 00:02:08.582 net/octeon_ep: not in enabled drivers build config 00:02:08.582 net/pcap: not in enabled drivers build config 00:02:08.582 net/pfe: not in enabled drivers build config 00:02:08.582 net/qede: not in enabled drivers build config 00:02:08.582 net/ring: not in enabled drivers build config 00:02:08.582 net/sfc: not in enabled drivers build config 00:02:08.582 net/softnic: not in enabled drivers build config 00:02:08.582 net/tap: 
not in enabled drivers build config 00:02:08.582 net/thunderx: not in enabled drivers build config 00:02:08.582 net/txgbe: not in enabled drivers build config 00:02:08.582 net/vdev_netvsc: not in enabled drivers build config 00:02:08.582 net/vhost: not in enabled drivers build config 00:02:08.582 net/virtio: not in enabled drivers build config 00:02:08.582 net/vmxnet3: not in enabled drivers build config 00:02:08.582 raw/*: missing internal dependency, "rawdev" 00:02:08.582 crypto/armv8: not in enabled drivers build config 00:02:08.582 crypto/bcmfs: not in enabled drivers build config 00:02:08.582 crypto/caam_jr: not in enabled drivers build config 00:02:08.582 crypto/ccp: not in enabled drivers build config 00:02:08.582 crypto/cnxk: not in enabled drivers build config 00:02:08.582 crypto/dpaa_sec: not in enabled drivers build config 00:02:08.583 crypto/dpaa2_sec: not in enabled drivers build config 00:02:08.583 crypto/ipsec_mb: not in enabled drivers build config 00:02:08.583 crypto/mlx5: not in enabled drivers build config 00:02:08.583 crypto/mvsam: not in enabled drivers build config 00:02:08.583 crypto/nitrox: not in enabled drivers build config 00:02:08.583 crypto/null: not in enabled drivers build config 00:02:08.583 crypto/octeontx: not in enabled drivers build config 00:02:08.583 crypto/openssl: not in enabled drivers build config 00:02:08.583 crypto/scheduler: not in enabled drivers build config 00:02:08.583 crypto/uadk: not in enabled drivers build config 00:02:08.583 crypto/virtio: not in enabled drivers build config 00:02:08.583 compress/isal: not in enabled drivers build config 00:02:08.583 compress/mlx5: not in enabled drivers build config 00:02:08.583 compress/nitrox: not in enabled drivers build config 00:02:08.583 compress/octeontx: not in enabled drivers build config 00:02:08.583 compress/zlib: not in enabled drivers build config 00:02:08.583 regex/*: missing internal dependency, "regexdev" 00:02:08.583 ml/*: missing internal dependency, "mldev" 
00:02:08.583 vdpa/ifc: not in enabled drivers build config 00:02:08.583 vdpa/mlx5: not in enabled drivers build config 00:02:08.583 vdpa/nfp: not in enabled drivers build config 00:02:08.583 vdpa/sfc: not in enabled drivers build config 00:02:08.583 event/*: missing internal dependency, "eventdev" 00:02:08.583 baseband/*: missing internal dependency, "bbdev" 00:02:08.583 gpu/*: missing internal dependency, "gpudev" 00:02:08.583 00:02:08.583 00:02:08.842 Build targets in project: 85 00:02:08.842 00:02:08.842 DPDK 24.03.0 00:02:08.842 00:02:08.842 User defined options 00:02:08.842 buildtype : debug 00:02:08.842 default_library : shared 00:02:08.842 libdir : lib 00:02:08.842 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:08.842 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:08.842 c_link_args : 00:02:08.842 cpu_instruction_set: native 00:02:08.842 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:08.842 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:08.842 enable_docs : false 00:02:08.842 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:08.842 enable_kmods : false 00:02:08.842 max_lcores : 128 00:02:08.842 tests : false 00:02:08.842 00:02:08.842 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:09.418 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:09.418 [1/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:09.418 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:09.418 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:09.418 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:09.418 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:09.418 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:09.418 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:09.418 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:09.677 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:09.677 [10/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:09.677 [11/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:09.677 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:09.677 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:09.677 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:09.677 [15/268] Linking static target lib/librte_kvargs.a 00:02:09.677 [16/268] Linking static target lib/librte_log.a 00:02:09.677 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:09.677 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:09.677 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:09.677 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:09.677 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:09.677 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:09.677 [23/268] Linking static target lib/librte_pci.a 00:02:09.677 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:09.939 [25/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:09.939 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:09.939 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:09.939 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:09.939 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:09.939 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:09.939 [31/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:09.939 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:09.939 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:09.939 [34/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:09.939 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:09.939 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:09.939 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:09.939 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:09.939 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:09.939 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:09.939 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:09.939 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:09.939 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:09.939 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:09.939 [45/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:09.939 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:09.939 
[47/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:09.939 [48/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:09.939 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:09.939 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:09.939 [51/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:09.939 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:09.939 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:09.939 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:09.939 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:09.939 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:09.939 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:09.939 [58/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:09.939 [59/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:09.939 [60/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:09.939 [61/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:09.939 [62/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:09.939 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:09.939 [64/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:09.939 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:09.939 [66/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:09.939 [67/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:09.939 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:09.939 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:09.939 [70/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:09.939 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:09.939 [72/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:10.199 [73/268] Linking static target lib/librte_meter.a 00:02:10.199 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:10.199 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:10.199 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:10.199 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:10.199 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:10.199 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:10.199 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:10.199 [81/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:10.199 [82/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:10.199 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:10.199 [84/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:10.199 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:10.199 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:10.199 [87/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:10.199 [88/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:10.199 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:10.199 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:10.199 [91/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:10.199 [92/268] Linking static target 
lib/librte_ring.a 00:02:10.199 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:10.199 [94/268] Linking static target lib/librte_telemetry.a 00:02:10.199 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:10.199 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:10.199 [97/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.199 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:10.199 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:10.199 [100/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:10.199 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:10.199 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:10.199 [103/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:10.199 [104/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:10.199 [105/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:10.199 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:10.199 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:10.199 [108/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.199 [109/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:10.199 [110/268] Linking static target lib/librte_rcu.a 00:02:10.199 [111/268] Linking static target lib/librte_net.a 00:02:10.199 [112/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:10.199 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:10.199 [114/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:10.199 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:10.199 [116/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:10.199 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:10.199 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:10.199 [119/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:10.199 [120/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:10.199 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:10.199 [122/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:10.199 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:10.199 [124/268] Linking static target lib/librte_mempool.a 00:02:10.199 [125/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:10.199 [126/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:10.199 [127/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:10.199 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:10.199 [129/268] Linking static target lib/librte_cmdline.a 00:02:10.199 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:10.199 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:10.199 [132/268] Linking static target lib/librte_eal.a 00:02:10.457 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:10.457 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.457 [135/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:10.457 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:10.457 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:10.457 [138/268] Generating lib/log.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:10.457 [139/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:10.457 [140/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:10.457 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:10.457 [142/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:10.457 [143/268] Linking target lib/librte_log.so.24.1 00:02:10.457 [144/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.457 [145/268] Linking static target lib/librte_mbuf.a 00:02:10.457 [146/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:10.457 [147/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.457 [148/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.457 [149/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:10.457 [150/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:10.457 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:10.457 [152/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:10.457 [153/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:10.457 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:10.457 [155/268] Linking static target lib/librte_reorder.a 00:02:10.457 [156/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:10.457 [157/268] Linking static target lib/librte_timer.a 00:02:10.457 [158/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:10.457 [159/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:10.457 [160/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:10.457 [161/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:10.457 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:10.458 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:10.458 [164/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:10.458 [165/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:10.458 [166/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:10.458 [167/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:10.458 [168/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:10.458 [169/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:10.458 [170/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:10.717 [171/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:10.717 [172/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.717 [173/268] Linking static target lib/librte_compressdev.a 00:02:10.717 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:10.717 [175/268] Linking target lib/librte_kvargs.so.24.1 00:02:10.717 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:10.717 [177/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:10.717 [178/268] Linking target lib/librte_telemetry.so.24.1 00:02:10.717 [179/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:10.717 [180/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:10.717 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:10.717 [182/268] Linking static target lib/librte_security.a 00:02:10.717 [183/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:10.717 [184/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:10.717 [185/268] Linking static target lib/librte_dmadev.a 00:02:10.717 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:10.717 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:10.717 [188/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:10.717 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:10.717 [190/268] Linking static target lib/librte_power.a 00:02:10.717 [191/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:10.717 [192/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:10.717 [193/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:10.717 [194/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:10.717 [195/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:10.717 [196/268] Linking static target lib/librte_hash.a 00:02:10.717 [197/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:10.717 [198/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:10.717 [199/268] Linking static target drivers/librte_mempool_ring.a 00:02:10.717 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:10.717 [201/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:10.717 [202/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:10.717 [203/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:10.976 [204/268] Linking static target drivers/librte_bus_vdev.a 00:02:10.976 [205/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:10.976 [206/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:10.976 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:10.976 [208/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:10.976 [209/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.976 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:10.976 [211/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:10.976 [212/268] Linking static target lib/librte_cryptodev.a 00:02:10.976 [213/268] Linking static target drivers/librte_bus_pci.a 00:02:10.976 [214/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.976 [215/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.235 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.235 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.235 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.235 [219/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.235 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:11.235 [221/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.235 [222/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.235 [223/268] Linking static target lib/librte_ethdev.a 00:02:11.494 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:11.494 [225/268] Generating lib/power.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:11.753 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.753 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.696 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:12.696 [229/268] Linking static target lib/librte_vhost.a 00:02:12.696 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.602 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.794 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.362 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.621 [234/268] Linking target lib/librte_eal.so.24.1 00:02:19.621 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:19.621 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:19.621 [237/268] Linking target lib/librte_ring.so.24.1 00:02:19.621 [238/268] Linking target lib/librte_pci.so.24.1 00:02:19.621 [239/268] Linking target lib/librte_timer.so.24.1 00:02:19.621 [240/268] Linking target lib/librte_meter.so.24.1 00:02:19.621 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:19.880 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:19.880 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:19.880 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:19.880 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:19.880 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:19.880 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:19.880 [248/268] 
Linking target drivers/librte_bus_pci.so.24.1 00:02:19.880 [249/268] Linking target lib/librte_rcu.so.24.1 00:02:19.880 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:19.880 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:20.140 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:20.140 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:20.140 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:20.140 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:20.140 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:20.140 [257/268] Linking target lib/librte_net.so.24.1 00:02:20.140 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:20.399 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:20.399 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:20.399 [261/268] Linking target lib/librte_security.so.24.1 00:02:20.399 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:20.399 [263/268] Linking target lib/librte_hash.so.24.1 00:02:20.399 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:20.659 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:20.659 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:20.659 [267/268] Linking target lib/librte_power.so.24.1 00:02:20.659 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:20.659 INFO: autodetecting backend as ninja 00:02:20.659 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:32.872 CC lib/ut/ut.o 00:02:32.872 CC lib/log/log.o 00:02:32.872 CC lib/log/log_deprecated.o 00:02:32.872 CC lib/log/log_flags.o 00:02:32.872 CC lib/ut_mock/mock.o 00:02:32.872 LIB 
libspdk_ut.a 00:02:32.872 LIB libspdk_log.a 00:02:32.872 SO libspdk_ut.so.2.0 00:02:32.872 LIB libspdk_ut_mock.a 00:02:32.872 SO libspdk_log.so.7.1 00:02:32.872 SO libspdk_ut_mock.so.6.0 00:02:32.872 SYMLINK libspdk_ut.so 00:02:32.872 SYMLINK libspdk_log.so 00:02:32.872 SYMLINK libspdk_ut_mock.so 00:02:32.872 CC lib/ioat/ioat.o 00:02:32.872 CXX lib/trace_parser/trace.o 00:02:32.872 CC lib/util/base64.o 00:02:32.873 CC lib/util/bit_array.o 00:02:32.873 CC lib/dma/dma.o 00:02:32.873 CC lib/util/cpuset.o 00:02:32.873 CC lib/util/crc32.o 00:02:32.873 CC lib/util/crc16.o 00:02:32.873 CC lib/util/crc32c.o 00:02:32.873 CC lib/util/dif.o 00:02:32.873 CC lib/util/crc32_ieee.o 00:02:32.873 CC lib/util/crc64.o 00:02:32.873 CC lib/util/fd.o 00:02:32.873 CC lib/util/fd_group.o 00:02:32.873 CC lib/util/file.o 00:02:32.873 CC lib/util/hexlify.o 00:02:32.873 CC lib/util/iov.o 00:02:32.873 CC lib/util/math.o 00:02:32.873 CC lib/util/net.o 00:02:32.873 CC lib/util/pipe.o 00:02:32.873 CC lib/util/strerror_tls.o 00:02:32.873 CC lib/util/string.o 00:02:32.873 CC lib/util/uuid.o 00:02:32.873 CC lib/util/xor.o 00:02:32.873 CC lib/util/zipf.o 00:02:32.873 CC lib/util/md5.o 00:02:32.873 CC lib/vfio_user/host/vfio_user.o 00:02:32.873 CC lib/vfio_user/host/vfio_user_pci.o 00:02:32.873 LIB libspdk_dma.a 00:02:32.873 SO libspdk_dma.so.5.0 00:02:32.873 LIB libspdk_ioat.a 00:02:32.873 SO libspdk_ioat.so.7.0 00:02:32.873 SYMLINK libspdk_dma.so 00:02:32.873 SYMLINK libspdk_ioat.so 00:02:33.131 LIB libspdk_vfio_user.a 00:02:33.131 SO libspdk_vfio_user.so.5.0 00:02:33.131 LIB libspdk_util.a 00:02:33.131 SYMLINK libspdk_vfio_user.so 00:02:33.131 SO libspdk_util.so.10.1 00:02:33.388 SYMLINK libspdk_util.so 00:02:33.388 LIB libspdk_trace_parser.a 00:02:33.388 SO libspdk_trace_parser.so.6.0 00:02:33.388 SYMLINK libspdk_trace_parser.so 00:02:33.647 CC lib/conf/conf.o 00:02:33.647 CC lib/idxd/idxd.o 00:02:33.647 CC lib/idxd/idxd_user.o 00:02:33.647 CC lib/idxd/idxd_kernel.o 00:02:33.647 CC lib/vmd/vmd.o 
00:02:33.647 CC lib/rdma_utils/rdma_utils.o 00:02:33.647 CC lib/vmd/led.o 00:02:33.647 CC lib/env_dpdk/env.o 00:02:33.647 CC lib/env_dpdk/memory.o 00:02:33.647 CC lib/env_dpdk/pci.o 00:02:33.647 CC lib/env_dpdk/init.o 00:02:33.647 CC lib/env_dpdk/threads.o 00:02:33.647 CC lib/json/json_parse.o 00:02:33.647 CC lib/env_dpdk/pci_ioat.o 00:02:33.647 CC lib/json/json_util.o 00:02:33.647 CC lib/env_dpdk/pci_virtio.o 00:02:33.647 CC lib/env_dpdk/pci_idxd.o 00:02:33.647 CC lib/json/json_write.o 00:02:33.647 CC lib/env_dpdk/pci_vmd.o 00:02:33.647 CC lib/env_dpdk/pci_event.o 00:02:33.647 CC lib/env_dpdk/sigbus_handler.o 00:02:33.647 CC lib/env_dpdk/pci_dpdk.o 00:02:33.647 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:33.647 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:33.906 LIB libspdk_conf.a 00:02:33.906 SO libspdk_conf.so.6.0 00:02:33.906 LIB libspdk_rdma_utils.a 00:02:33.906 SO libspdk_rdma_utils.so.1.0 00:02:33.906 LIB libspdk_json.a 00:02:33.906 SYMLINK libspdk_conf.so 00:02:33.906 SO libspdk_json.so.6.0 00:02:33.906 SYMLINK libspdk_rdma_utils.so 00:02:33.906 SYMLINK libspdk_json.so 00:02:34.166 LIB libspdk_idxd.a 00:02:34.166 SO libspdk_idxd.so.12.1 00:02:34.166 LIB libspdk_vmd.a 00:02:34.166 SO libspdk_vmd.so.6.0 00:02:34.166 SYMLINK libspdk_idxd.so 00:02:34.166 SYMLINK libspdk_vmd.so 00:02:34.166 CC lib/rdma_provider/common.o 00:02:34.166 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:34.166 CC lib/jsonrpc/jsonrpc_server.o 00:02:34.166 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:34.166 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:34.166 CC lib/jsonrpc/jsonrpc_client.o 00:02:34.425 LIB libspdk_rdma_provider.a 00:02:34.425 SO libspdk_rdma_provider.so.7.0 00:02:34.425 LIB libspdk_jsonrpc.a 00:02:34.425 SYMLINK libspdk_rdma_provider.so 00:02:34.425 SO libspdk_jsonrpc.so.6.0 00:02:34.684 SYMLINK libspdk_jsonrpc.so 00:02:34.684 LIB libspdk_env_dpdk.a 00:02:34.684 SO libspdk_env_dpdk.so.15.1 00:02:34.684 SYMLINK libspdk_env_dpdk.so 00:02:34.943 CC lib/rpc/rpc.o 00:02:35.203 LIB 
libspdk_rpc.a 00:02:35.203 SO libspdk_rpc.so.6.0 00:02:35.203 SYMLINK libspdk_rpc.so 00:02:35.462 CC lib/keyring/keyring.o 00:02:35.462 CC lib/keyring/keyring_rpc.o 00:02:35.462 CC lib/trace/trace.o 00:02:35.462 CC lib/trace/trace_flags.o 00:02:35.462 CC lib/trace/trace_rpc.o 00:02:35.462 CC lib/notify/notify.o 00:02:35.462 CC lib/notify/notify_rpc.o 00:02:35.722 LIB libspdk_notify.a 00:02:35.722 LIB libspdk_keyring.a 00:02:35.722 LIB libspdk_trace.a 00:02:35.722 SO libspdk_keyring.so.2.0 00:02:35.722 SO libspdk_notify.so.6.0 00:02:35.722 SO libspdk_trace.so.11.0 00:02:35.722 SYMLINK libspdk_keyring.so 00:02:35.722 SYMLINK libspdk_notify.so 00:02:35.722 SYMLINK libspdk_trace.so 00:02:35.981 CC lib/thread/thread.o 00:02:35.981 CC lib/thread/iobuf.o 00:02:35.981 CC lib/sock/sock.o 00:02:35.981 CC lib/sock/sock_rpc.o 00:02:36.550 LIB libspdk_sock.a 00:02:36.550 SO libspdk_sock.so.10.0 00:02:36.550 SYMLINK libspdk_sock.so 00:02:36.809 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:36.809 CC lib/nvme/nvme_ctrlr.o 00:02:36.809 CC lib/nvme/nvme_ns_cmd.o 00:02:36.809 CC lib/nvme/nvme_fabric.o 00:02:36.809 CC lib/nvme/nvme_ns.o 00:02:36.809 CC lib/nvme/nvme_qpair.o 00:02:36.809 CC lib/nvme/nvme_pcie_common.o 00:02:36.809 CC lib/nvme/nvme_pcie.o 00:02:36.809 CC lib/nvme/nvme.o 00:02:36.809 CC lib/nvme/nvme_discovery.o 00:02:36.809 CC lib/nvme/nvme_quirks.o 00:02:36.809 CC lib/nvme/nvme_transport.o 00:02:36.809 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:36.809 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:36.809 CC lib/nvme/nvme_tcp.o 00:02:36.809 CC lib/nvme/nvme_opal.o 00:02:36.809 CC lib/nvme/nvme_io_msg.o 00:02:36.809 CC lib/nvme/nvme_poll_group.o 00:02:36.809 CC lib/nvme/nvme_zns.o 00:02:36.809 CC lib/nvme/nvme_stubs.o 00:02:36.809 CC lib/nvme/nvme_auth.o 00:02:36.809 CC lib/nvme/nvme_cuse.o 00:02:36.809 CC lib/nvme/nvme_vfio_user.o 00:02:36.809 CC lib/nvme/nvme_rdma.o 00:02:37.068 LIB libspdk_thread.a 00:02:37.068 SO libspdk_thread.so.11.0 00:02:37.326 SYMLINK libspdk_thread.so 00:02:37.584 
CC lib/virtio/virtio_vhost_user.o 00:02:37.584 CC lib/virtio/virtio_vfio_user.o 00:02:37.584 CC lib/virtio/virtio.o 00:02:37.584 CC lib/virtio/virtio_pci.o 00:02:37.584 CC lib/blob/blobstore.o 00:02:37.584 CC lib/blob/blob_bs_dev.o 00:02:37.584 CC lib/blob/request.o 00:02:37.584 CC lib/blob/zeroes.o 00:02:37.584 CC lib/init/json_config.o 00:02:37.584 CC lib/accel/accel.o 00:02:37.584 CC lib/init/subsystem.o 00:02:37.584 CC lib/accel/accel_rpc.o 00:02:37.584 CC lib/accel/accel_sw.o 00:02:37.584 CC lib/init/subsystem_rpc.o 00:02:37.584 CC lib/init/rpc.o 00:02:37.584 CC lib/fsdev/fsdev.o 00:02:37.584 CC lib/fsdev/fsdev_io.o 00:02:37.584 CC lib/fsdev/fsdev_rpc.o 00:02:37.584 CC lib/vfu_tgt/tgt_endpoint.o 00:02:37.584 CC lib/vfu_tgt/tgt_rpc.o 00:02:37.843 LIB libspdk_init.a 00:02:37.843 LIB libspdk_virtio.a 00:02:37.843 SO libspdk_init.so.6.0 00:02:37.843 LIB libspdk_vfu_tgt.a 00:02:37.843 SO libspdk_virtio.so.7.0 00:02:37.843 SO libspdk_vfu_tgt.so.3.0 00:02:37.843 SYMLINK libspdk_init.so 00:02:37.843 SYMLINK libspdk_virtio.so 00:02:37.843 SYMLINK libspdk_vfu_tgt.so 00:02:38.102 LIB libspdk_fsdev.a 00:02:38.102 SO libspdk_fsdev.so.2.0 00:02:38.102 CC lib/event/app.o 00:02:38.102 CC lib/event/reactor.o 00:02:38.102 CC lib/event/log_rpc.o 00:02:38.102 CC lib/event/app_rpc.o 00:02:38.102 CC lib/event/scheduler_static.o 00:02:38.102 SYMLINK libspdk_fsdev.so 00:02:38.361 LIB libspdk_accel.a 00:02:38.361 SO libspdk_accel.so.16.0 00:02:38.361 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:38.361 SYMLINK libspdk_accel.so 00:02:38.361 LIB libspdk_event.a 00:02:38.361 LIB libspdk_nvme.a 00:02:38.361 SO libspdk_event.so.14.0 00:02:38.620 SYMLINK libspdk_event.so 00:02:38.620 SO libspdk_nvme.so.15.0 00:02:38.620 CC lib/bdev/bdev.o 00:02:38.620 CC lib/bdev/bdev_rpc.o 00:02:38.620 CC lib/bdev/bdev_zone.o 00:02:38.620 CC lib/bdev/part.o 00:02:38.620 CC lib/bdev/scsi_nvme.o 00:02:38.879 SYMLINK libspdk_nvme.so 00:02:38.879 LIB libspdk_fuse_dispatcher.a 00:02:38.879 SO 
libspdk_fuse_dispatcher.so.1.0 00:02:38.879 SYMLINK libspdk_fuse_dispatcher.so 00:02:39.816 LIB libspdk_blob.a 00:02:39.816 SO libspdk_blob.so.12.0 00:02:39.816 SYMLINK libspdk_blob.so 00:02:40.075 CC lib/lvol/lvol.o 00:02:40.075 CC lib/blobfs/blobfs.o 00:02:40.075 CC lib/blobfs/tree.o 00:02:40.650 LIB libspdk_blobfs.a 00:02:40.650 SO libspdk_blobfs.so.11.0 00:02:40.650 LIB libspdk_bdev.a 00:02:40.650 LIB libspdk_lvol.a 00:02:40.650 SO libspdk_lvol.so.11.0 00:02:40.650 SO libspdk_bdev.so.17.0 00:02:40.650 SYMLINK libspdk_blobfs.so 00:02:40.650 SYMLINK libspdk_lvol.so 00:02:40.909 SYMLINK libspdk_bdev.so 00:02:41.168 CC lib/nvmf/ctrlr.o 00:02:41.168 CC lib/nvmf/ctrlr_discovery.o 00:02:41.168 CC lib/nvmf/ctrlr_bdev.o 00:02:41.168 CC lib/nvmf/subsystem.o 00:02:41.168 CC lib/nvmf/nvmf.o 00:02:41.168 CC lib/nvmf/tcp.o 00:02:41.168 CC lib/nvmf/nvmf_rpc.o 00:02:41.168 CC lib/nvmf/transport.o 00:02:41.168 CC lib/nvmf/stubs.o 00:02:41.168 CC lib/nvmf/mdns_server.o 00:02:41.168 CC lib/nvmf/vfio_user.o 00:02:41.168 CC lib/nvmf/rdma.o 00:02:41.168 CC lib/scsi/dev.o 00:02:41.168 CC lib/nvmf/auth.o 00:02:41.168 CC lib/scsi/lun.o 00:02:41.168 CC lib/scsi/port.o 00:02:41.168 CC lib/scsi/scsi.o 00:02:41.168 CC lib/scsi/scsi_bdev.o 00:02:41.168 CC lib/scsi/scsi_pr.o 00:02:41.168 CC lib/scsi/scsi_rpc.o 00:02:41.168 CC lib/scsi/task.o 00:02:41.168 CC lib/ftl/ftl_core.o 00:02:41.168 CC lib/ftl/ftl_init.o 00:02:41.168 CC lib/ftl/ftl_layout.o 00:02:41.168 CC lib/ftl/ftl_debug.o 00:02:41.168 CC lib/ftl/ftl_io.o 00:02:41.168 CC lib/ftl/ftl_sb.o 00:02:41.168 CC lib/ftl/ftl_l2p.o 00:02:41.168 CC lib/ftl/ftl_nv_cache.o 00:02:41.168 CC lib/ftl/ftl_l2p_flat.o 00:02:41.168 CC lib/ftl/ftl_writer.o 00:02:41.168 CC lib/ftl/ftl_band.o 00:02:41.168 CC lib/ftl/ftl_band_ops.o 00:02:41.168 CC lib/ftl/ftl_rq.o 00:02:41.168 CC lib/ftl/ftl_reloc.o 00:02:41.168 CC lib/ftl/ftl_l2p_cache.o 00:02:41.168 CC lib/ftl/ftl_p2l_log.o 00:02:41.168 CC lib/ftl/ftl_p2l.o 00:02:41.168 CC lib/ftl/mngt/ftl_mngt.o 
00:02:41.168 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:41.168 CC lib/ublk/ublk.o 00:02:41.168 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:41.168 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:41.168 CC lib/ublk/ublk_rpc.o 00:02:41.168 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:41.168 CC lib/nbd/nbd.o 00:02:41.168 CC lib/nbd/nbd_rpc.o 00:02:41.168 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:41.168 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:41.168 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:41.168 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:41.168 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:41.168 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:41.168 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:41.168 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:41.168 CC lib/ftl/utils/ftl_conf.o 00:02:41.168 CC lib/ftl/utils/ftl_md.o 00:02:41.168 CC lib/ftl/utils/ftl_mempool.o 00:02:41.168 CC lib/ftl/utils/ftl_property.o 00:02:41.168 CC lib/ftl/utils/ftl_bitmap.o 00:02:41.168 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:41.168 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:41.168 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:41.168 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:41.168 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:41.168 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:41.168 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:41.168 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:41.168 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:41.168 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:41.168 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:41.168 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:41.168 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:41.168 CC lib/ftl/base/ftl_base_bdev.o 00:02:41.168 CC lib/ftl/base/ftl_base_dev.o 00:02:41.168 CC lib/ftl/ftl_trace.o 00:02:41.733 LIB libspdk_nbd.a 00:02:41.733 SO libspdk_nbd.so.7.0 00:02:41.733 LIB libspdk_scsi.a 00:02:41.733 SO libspdk_scsi.so.9.0 00:02:41.733 SYMLINK libspdk_nbd.so 00:02:41.733 SYMLINK libspdk_scsi.so 00:02:41.992 LIB libspdk_ublk.a 00:02:41.992 SO libspdk_ublk.so.3.0 00:02:41.992 SYMLINK libspdk_ublk.so 00:02:41.992 CC 
lib/vhost/vhost_rpc.o 00:02:41.992 CC lib/vhost/vhost.o 00:02:41.992 CC lib/vhost/vhost_scsi.o 00:02:41.992 CC lib/vhost/vhost_blk.o 00:02:41.992 CC lib/iscsi/conn.o 00:02:41.992 CC lib/vhost/rte_vhost_user.o 00:02:41.992 CC lib/iscsi/init_grp.o 00:02:41.992 CC lib/iscsi/iscsi.o 00:02:42.251 CC lib/iscsi/param.o 00:02:42.251 CC lib/iscsi/portal_grp.o 00:02:42.251 CC lib/iscsi/iscsi_rpc.o 00:02:42.251 CC lib/iscsi/tgt_node.o 00:02:42.251 CC lib/iscsi/iscsi_subsystem.o 00:02:42.251 CC lib/iscsi/task.o 00:02:42.252 LIB libspdk_ftl.a 00:02:42.252 SO libspdk_ftl.so.9.0 00:02:42.510 SYMLINK libspdk_ftl.so 00:02:43.076 LIB libspdk_nvmf.a 00:02:43.076 LIB libspdk_vhost.a 00:02:43.076 SO libspdk_vhost.so.8.0 00:02:43.076 SO libspdk_nvmf.so.20.0 00:02:43.076 SYMLINK libspdk_vhost.so 00:02:43.076 LIB libspdk_iscsi.a 00:02:43.076 SYMLINK libspdk_nvmf.so 00:02:43.076 SO libspdk_iscsi.so.8.0 00:02:43.336 SYMLINK libspdk_iscsi.so 00:02:43.903 CC module/vfu_device/vfu_virtio.o 00:02:43.903 CC module/vfu_device/vfu_virtio_blk.o 00:02:43.903 CC module/env_dpdk/env_dpdk_rpc.o 00:02:43.903 CC module/vfu_device/vfu_virtio_rpc.o 00:02:43.903 CC module/vfu_device/vfu_virtio_scsi.o 00:02:43.903 CC module/vfu_device/vfu_virtio_fs.o 00:02:43.903 LIB libspdk_env_dpdk_rpc.a 00:02:43.903 CC module/accel/error/accel_error.o 00:02:43.903 CC module/accel/error/accel_error_rpc.o 00:02:43.903 CC module/accel/ioat/accel_ioat_rpc.o 00:02:43.903 CC module/accel/ioat/accel_ioat.o 00:02:43.903 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:43.903 CC module/blob/bdev/blob_bdev.o 00:02:43.903 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:43.903 CC module/fsdev/aio/fsdev_aio.o 00:02:43.903 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:43.903 CC module/accel/iaa/accel_iaa.o 00:02:43.903 CC module/accel/iaa/accel_iaa_rpc.o 00:02:43.903 CC module/fsdev/aio/linux_aio_mgr.o 00:02:43.903 CC module/sock/posix/posix.o 00:02:43.903 CC module/keyring/linux/keyring.o 00:02:43.903 CC 
module/keyring/linux/keyring_rpc.o 00:02:43.903 CC module/scheduler/gscheduler/gscheduler.o 00:02:43.903 CC module/accel/dsa/accel_dsa.o 00:02:43.903 CC module/accel/dsa/accel_dsa_rpc.o 00:02:43.903 SO libspdk_env_dpdk_rpc.so.6.0 00:02:43.903 CC module/keyring/file/keyring.o 00:02:43.903 CC module/keyring/file/keyring_rpc.o 00:02:44.162 SYMLINK libspdk_env_dpdk_rpc.so 00:02:44.162 LIB libspdk_keyring_linux.a 00:02:44.162 LIB libspdk_scheduler_gscheduler.a 00:02:44.162 LIB libspdk_scheduler_dpdk_governor.a 00:02:44.162 LIB libspdk_keyring_file.a 00:02:44.162 LIB libspdk_accel_ioat.a 00:02:44.162 LIB libspdk_accel_iaa.a 00:02:44.162 SO libspdk_keyring_linux.so.1.0 00:02:44.162 SO libspdk_scheduler_gscheduler.so.4.0 00:02:44.162 LIB libspdk_scheduler_dynamic.a 00:02:44.162 LIB libspdk_accel_error.a 00:02:44.162 SO libspdk_accel_iaa.so.3.0 00:02:44.162 SO libspdk_accel_ioat.so.6.0 00:02:44.162 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:44.162 SO libspdk_keyring_file.so.2.0 00:02:44.162 SO libspdk_scheduler_dynamic.so.4.0 00:02:44.162 SO libspdk_accel_error.so.2.0 00:02:44.162 SYMLINK libspdk_scheduler_gscheduler.so 00:02:44.162 SYMLINK libspdk_keyring_linux.so 00:02:44.162 LIB libspdk_blob_bdev.a 00:02:44.162 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:44.162 LIB libspdk_accel_dsa.a 00:02:44.162 SYMLINK libspdk_keyring_file.so 00:02:44.162 SYMLINK libspdk_accel_ioat.so 00:02:44.162 SYMLINK libspdk_scheduler_dynamic.so 00:02:44.162 SYMLINK libspdk_accel_iaa.so 00:02:44.162 SO libspdk_blob_bdev.so.12.0 00:02:44.162 SYMLINK libspdk_accel_error.so 00:02:44.162 SO libspdk_accel_dsa.so.5.0 00:02:44.420 LIB libspdk_vfu_device.a 00:02:44.420 SYMLINK libspdk_blob_bdev.so 00:02:44.420 SYMLINK libspdk_accel_dsa.so 00:02:44.420 SO libspdk_vfu_device.so.3.0 00:02:44.420 SYMLINK libspdk_vfu_device.so 00:02:44.420 LIB libspdk_fsdev_aio.a 00:02:44.420 LIB libspdk_sock_posix.a 00:02:44.420 SO libspdk_fsdev_aio.so.1.0 00:02:44.678 SO libspdk_sock_posix.so.6.0 00:02:44.678 
SYMLINK libspdk_fsdev_aio.so 00:02:44.678 SYMLINK libspdk_sock_posix.so 00:02:44.678 CC module/bdev/delay/vbdev_delay.o 00:02:44.678 CC module/bdev/lvol/vbdev_lvol.o 00:02:44.679 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:44.679 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:44.679 CC module/bdev/malloc/bdev_malloc.o 00:02:44.679 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:44.679 CC module/bdev/iscsi/bdev_iscsi.o 00:02:44.679 CC module/bdev/split/vbdev_split.o 00:02:44.679 CC module/bdev/null/bdev_null.o 00:02:44.679 CC module/bdev/null/bdev_null_rpc.o 00:02:44.679 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:44.679 CC module/bdev/split/vbdev_split_rpc.o 00:02:44.679 CC module/bdev/gpt/gpt.o 00:02:44.679 CC module/bdev/gpt/vbdev_gpt.o 00:02:44.679 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:44.679 CC module/blobfs/bdev/blobfs_bdev.o 00:02:44.679 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:44.679 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:44.679 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:44.679 CC module/bdev/passthru/vbdev_passthru.o 00:02:44.679 CC module/bdev/aio/bdev_aio.o 00:02:44.679 CC module/bdev/error/vbdev_error.o 00:02:44.679 CC module/bdev/aio/bdev_aio_rpc.o 00:02:44.679 CC module/bdev/error/vbdev_error_rpc.o 00:02:44.679 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:44.679 CC module/bdev/raid/bdev_raid.o 00:02:44.679 CC module/bdev/raid/bdev_raid_rpc.o 00:02:44.679 CC module/bdev/raid/raid0.o 00:02:44.679 CC module/bdev/ftl/bdev_ftl.o 00:02:44.679 CC module/bdev/raid/bdev_raid_sb.o 00:02:44.679 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:44.679 CC module/bdev/raid/concat.o 00:02:44.679 CC module/bdev/raid/raid1.o 00:02:44.679 CC module/bdev/nvme/bdev_nvme.o 00:02:44.679 CC module/bdev/nvme/nvme_rpc.o 00:02:44.679 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:44.679 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:44.679 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:44.679 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:44.679 CC 
module/bdev/nvme/bdev_mdns_client.o 00:02:44.679 CC module/bdev/nvme/vbdev_opal.o 00:02:44.679 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:44.936 LIB libspdk_blobfs_bdev.a 00:02:44.936 LIB libspdk_bdev_split.a 00:02:44.936 SO libspdk_blobfs_bdev.so.6.0 00:02:44.936 LIB libspdk_bdev_null.a 00:02:44.936 SO libspdk_bdev_split.so.6.0 00:02:44.936 SO libspdk_bdev_null.so.6.0 00:02:44.936 LIB libspdk_bdev_gpt.a 00:02:45.195 SYMLINK libspdk_blobfs_bdev.so 00:02:45.195 SO libspdk_bdev_gpt.so.6.0 00:02:45.195 LIB libspdk_bdev_ftl.a 00:02:45.195 SYMLINK libspdk_bdev_split.so 00:02:45.195 LIB libspdk_bdev_error.a 00:02:45.195 LIB libspdk_bdev_passthru.a 00:02:45.195 LIB libspdk_bdev_iscsi.a 00:02:45.195 SYMLINK libspdk_bdev_null.so 00:02:45.195 LIB libspdk_bdev_aio.a 00:02:45.195 SO libspdk_bdev_ftl.so.6.0 00:02:45.195 SO libspdk_bdev_error.so.6.0 00:02:45.195 LIB libspdk_bdev_delay.a 00:02:45.195 SYMLINK libspdk_bdev_gpt.so 00:02:45.195 LIB libspdk_bdev_zone_block.a 00:02:45.195 LIB libspdk_bdev_malloc.a 00:02:45.195 SO libspdk_bdev_passthru.so.6.0 00:02:45.195 SO libspdk_bdev_iscsi.so.6.0 00:02:45.195 SO libspdk_bdev_delay.so.6.0 00:02:45.195 SO libspdk_bdev_aio.so.6.0 00:02:45.195 SO libspdk_bdev_zone_block.so.6.0 00:02:45.195 SO libspdk_bdev_malloc.so.6.0 00:02:45.195 SYMLINK libspdk_bdev_ftl.so 00:02:45.195 SYMLINK libspdk_bdev_error.so 00:02:45.195 SYMLINK libspdk_bdev_passthru.so 00:02:45.195 SYMLINK libspdk_bdev_iscsi.so 00:02:45.195 SYMLINK libspdk_bdev_aio.so 00:02:45.195 SYMLINK libspdk_bdev_delay.so 00:02:45.195 LIB libspdk_bdev_lvol.a 00:02:45.195 SYMLINK libspdk_bdev_malloc.so 00:02:45.195 SYMLINK libspdk_bdev_zone_block.so 00:02:45.195 SO libspdk_bdev_lvol.so.6.0 00:02:45.195 LIB libspdk_bdev_virtio.a 00:02:45.195 SO libspdk_bdev_virtio.so.6.0 00:02:45.453 SYMLINK libspdk_bdev_lvol.so 00:02:45.453 SYMLINK libspdk_bdev_virtio.so 00:02:45.713 LIB libspdk_bdev_raid.a 00:02:45.713 SO libspdk_bdev_raid.so.6.0 00:02:45.713 SYMLINK libspdk_bdev_raid.so 00:02:46.650 
LIB libspdk_bdev_nvme.a 00:02:46.650 SO libspdk_bdev_nvme.so.7.1 00:02:46.909 SYMLINK libspdk_bdev_nvme.so 00:02:47.476 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:47.476 CC module/event/subsystems/fsdev/fsdev.o 00:02:47.476 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:47.476 CC module/event/subsystems/vmd/vmd.o 00:02:47.476 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:47.476 CC module/event/subsystems/iobuf/iobuf.o 00:02:47.476 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:47.476 CC module/event/subsystems/keyring/keyring.o 00:02:47.476 CC module/event/subsystems/scheduler/scheduler.o 00:02:47.476 CC module/event/subsystems/sock/sock.o 00:02:47.476 LIB libspdk_event_fsdev.a 00:02:47.476 LIB libspdk_event_vhost_blk.a 00:02:47.476 LIB libspdk_event_keyring.a 00:02:47.476 LIB libspdk_event_scheduler.a 00:02:47.476 LIB libspdk_event_vfu_tgt.a 00:02:47.476 LIB libspdk_event_vmd.a 00:02:47.476 SO libspdk_event_keyring.so.1.0 00:02:47.476 LIB libspdk_event_iobuf.a 00:02:47.476 SO libspdk_event_scheduler.so.4.0 00:02:47.476 LIB libspdk_event_sock.a 00:02:47.476 SO libspdk_event_fsdev.so.1.0 00:02:47.476 SO libspdk_event_vhost_blk.so.3.0 00:02:47.476 SO libspdk_event_vfu_tgt.so.3.0 00:02:47.476 SO libspdk_event_vmd.so.6.0 00:02:47.738 SO libspdk_event_iobuf.so.3.0 00:02:47.738 SO libspdk_event_sock.so.5.0 00:02:47.738 SYMLINK libspdk_event_fsdev.so 00:02:47.738 SYMLINK libspdk_event_keyring.so 00:02:47.738 SYMLINK libspdk_event_scheduler.so 00:02:47.738 SYMLINK libspdk_event_vhost_blk.so 00:02:47.738 SYMLINK libspdk_event_vfu_tgt.so 00:02:47.738 SYMLINK libspdk_event_vmd.so 00:02:47.738 SYMLINK libspdk_event_iobuf.so 00:02:47.738 SYMLINK libspdk_event_sock.so 00:02:47.999 CC module/event/subsystems/accel/accel.o 00:02:47.999 LIB libspdk_event_accel.a 00:02:47.999 SO libspdk_event_accel.so.6.0 00:02:48.257 SYMLINK libspdk_event_accel.so 00:02:48.516 CC module/event/subsystems/bdev/bdev.o 00:02:48.516 LIB libspdk_event_bdev.a 00:02:48.775 SO 
libspdk_event_bdev.so.6.0 00:02:48.775 SYMLINK libspdk_event_bdev.so 00:02:49.033 CC module/event/subsystems/nbd/nbd.o 00:02:49.033 CC module/event/subsystems/ublk/ublk.o 00:02:49.033 CC module/event/subsystems/scsi/scsi.o 00:02:49.033 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:49.033 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:49.033 LIB libspdk_event_ublk.a 00:02:49.033 LIB libspdk_event_nbd.a 00:02:49.033 SO libspdk_event_ublk.so.3.0 00:02:49.292 SO libspdk_event_nbd.so.6.0 00:02:49.292 LIB libspdk_event_scsi.a 00:02:49.292 SO libspdk_event_scsi.so.6.0 00:02:49.292 SYMLINK libspdk_event_nbd.so 00:02:49.292 SYMLINK libspdk_event_ublk.so 00:02:49.292 LIB libspdk_event_nvmf.a 00:02:49.292 SO libspdk_event_nvmf.so.6.0 00:02:49.292 SYMLINK libspdk_event_scsi.so 00:02:49.292 SYMLINK libspdk_event_nvmf.so 00:02:49.550 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:49.550 CC module/event/subsystems/iscsi/iscsi.o 00:02:49.550 LIB libspdk_event_vhost_scsi.a 00:02:49.809 SO libspdk_event_vhost_scsi.so.3.0 00:02:49.809 LIB libspdk_event_iscsi.a 00:02:49.809 SO libspdk_event_iscsi.so.6.0 00:02:49.809 SYMLINK libspdk_event_vhost_scsi.so 00:02:49.809 SYMLINK libspdk_event_iscsi.so 00:02:50.068 SO libspdk.so.6.0 00:02:50.068 SYMLINK libspdk.so 00:02:50.335 CC test/rpc_client/rpc_client_test.o 00:02:50.335 TEST_HEADER include/spdk/accel.h 00:02:50.335 TEST_HEADER include/spdk/accel_module.h 00:02:50.335 TEST_HEADER include/spdk/barrier.h 00:02:50.335 TEST_HEADER include/spdk/base64.h 00:02:50.335 TEST_HEADER include/spdk/bdev.h 00:02:50.335 TEST_HEADER include/spdk/assert.h 00:02:50.335 TEST_HEADER include/spdk/bdev_module.h 00:02:50.335 TEST_HEADER include/spdk/bdev_zone.h 00:02:50.335 TEST_HEADER include/spdk/bit_pool.h 00:02:50.335 TEST_HEADER include/spdk/bit_array.h 00:02:50.335 TEST_HEADER include/spdk/blob_bdev.h 00:02:50.335 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:50.335 TEST_HEADER include/spdk/blob.h 00:02:50.335 TEST_HEADER 
include/spdk/blobfs.h 00:02:50.335 TEST_HEADER include/spdk/conf.h 00:02:50.335 TEST_HEADER include/spdk/config.h 00:02:50.335 TEST_HEADER include/spdk/cpuset.h 00:02:50.335 TEST_HEADER include/spdk/crc32.h 00:02:50.335 TEST_HEADER include/spdk/crc64.h 00:02:50.335 TEST_HEADER include/spdk/crc16.h 00:02:50.335 TEST_HEADER include/spdk/dif.h 00:02:50.335 TEST_HEADER include/spdk/dma.h 00:02:50.335 TEST_HEADER include/spdk/env_dpdk.h 00:02:50.335 TEST_HEADER include/spdk/endian.h 00:02:50.335 CC app/trace_record/trace_record.o 00:02:50.335 TEST_HEADER include/spdk/event.h 00:02:50.335 TEST_HEADER include/spdk/env.h 00:02:50.335 TEST_HEADER include/spdk/fd_group.h 00:02:50.335 TEST_HEADER include/spdk/fd.h 00:02:50.335 TEST_HEADER include/spdk/file.h 00:02:50.335 TEST_HEADER include/spdk/fsdev.h 00:02:50.335 TEST_HEADER include/spdk/fsdev_module.h 00:02:50.335 TEST_HEADER include/spdk/ftl.h 00:02:50.335 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:50.335 TEST_HEADER include/spdk/histogram_data.h 00:02:50.335 TEST_HEADER include/spdk/hexlify.h 00:02:50.335 TEST_HEADER include/spdk/gpt_spec.h 00:02:50.335 TEST_HEADER include/spdk/idxd.h 00:02:50.335 TEST_HEADER include/spdk/ioat.h 00:02:50.335 TEST_HEADER include/spdk/init.h 00:02:50.335 TEST_HEADER include/spdk/ioat_spec.h 00:02:50.335 TEST_HEADER include/spdk/idxd_spec.h 00:02:50.335 CC app/spdk_top/spdk_top.o 00:02:50.335 TEST_HEADER include/spdk/iscsi_spec.h 00:02:50.335 CXX app/trace/trace.o 00:02:50.335 CC app/spdk_nvme_identify/identify.o 00:02:50.335 TEST_HEADER include/spdk/json.h 00:02:50.335 TEST_HEADER include/spdk/jsonrpc.h 00:02:50.335 CC app/spdk_lspci/spdk_lspci.o 00:02:50.335 TEST_HEADER include/spdk/likely.h 00:02:50.335 TEST_HEADER include/spdk/keyring.h 00:02:50.335 TEST_HEADER include/spdk/keyring_module.h 00:02:50.335 TEST_HEADER include/spdk/log.h 00:02:50.335 TEST_HEADER include/spdk/lvol.h 00:02:50.335 TEST_HEADER include/spdk/memory.h 00:02:50.335 TEST_HEADER include/spdk/md5.h 00:02:50.335 
CC app/spdk_nvme_discover/discovery_aer.o 00:02:50.335 TEST_HEADER include/spdk/mmio.h 00:02:50.335 TEST_HEADER include/spdk/nbd.h 00:02:50.335 TEST_HEADER include/spdk/net.h 00:02:50.335 TEST_HEADER include/spdk/notify.h 00:02:50.335 TEST_HEADER include/spdk/nvme.h 00:02:50.335 TEST_HEADER include/spdk/nvme_intel.h 00:02:50.335 CC app/spdk_nvme_perf/perf.o 00:02:50.335 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:50.335 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:50.335 TEST_HEADER include/spdk/nvme_spec.h 00:02:50.335 TEST_HEADER include/spdk/nvme_zns.h 00:02:50.335 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:50.335 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:50.335 TEST_HEADER include/spdk/nvmf_spec.h 00:02:50.335 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:50.335 TEST_HEADER include/spdk/nvmf_transport.h 00:02:50.335 TEST_HEADER include/spdk/nvmf.h 00:02:50.335 TEST_HEADER include/spdk/opal.h 00:02:50.335 TEST_HEADER include/spdk/opal_spec.h 00:02:50.335 TEST_HEADER include/spdk/pci_ids.h 00:02:50.335 TEST_HEADER include/spdk/queue.h 00:02:50.335 TEST_HEADER include/spdk/pipe.h 00:02:50.335 TEST_HEADER include/spdk/rpc.h 00:02:50.335 TEST_HEADER include/spdk/reduce.h 00:02:50.335 TEST_HEADER include/spdk/scheduler.h 00:02:50.335 TEST_HEADER include/spdk/scsi.h 00:02:50.335 TEST_HEADER include/spdk/scsi_spec.h 00:02:50.335 TEST_HEADER include/spdk/sock.h 00:02:50.335 TEST_HEADER include/spdk/stdinc.h 00:02:50.335 TEST_HEADER include/spdk/string.h 00:02:50.336 TEST_HEADER include/spdk/thread.h 00:02:50.336 TEST_HEADER include/spdk/trace.h 00:02:50.336 TEST_HEADER include/spdk/trace_parser.h 00:02:50.336 TEST_HEADER include/spdk/tree.h 00:02:50.336 TEST_HEADER include/spdk/ublk.h 00:02:50.336 TEST_HEADER include/spdk/util.h 00:02:50.336 TEST_HEADER include/spdk/uuid.h 00:02:50.336 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:50.336 TEST_HEADER include/spdk/version.h 00:02:50.336 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:50.336 TEST_HEADER 
include/spdk/vmd.h 00:02:50.336 TEST_HEADER include/spdk/vhost.h 00:02:50.336 TEST_HEADER include/spdk/zipf.h 00:02:50.336 TEST_HEADER include/spdk/xor.h 00:02:50.336 CXX test/cpp_headers/accel.o 00:02:50.336 CXX test/cpp_headers/accel_module.o 00:02:50.336 CXX test/cpp_headers/assert.o 00:02:50.336 CC app/spdk_dd/spdk_dd.o 00:02:50.336 CXX test/cpp_headers/barrier.o 00:02:50.336 CXX test/cpp_headers/base64.o 00:02:50.336 CXX test/cpp_headers/bdev.o 00:02:50.336 CXX test/cpp_headers/bdev_module.o 00:02:50.336 CC app/iscsi_tgt/iscsi_tgt.o 00:02:50.336 CXX test/cpp_headers/bit_array.o 00:02:50.336 CXX test/cpp_headers/bdev_zone.o 00:02:50.336 CXX test/cpp_headers/bit_pool.o 00:02:50.336 CXX test/cpp_headers/blobfs_bdev.o 00:02:50.336 CXX test/cpp_headers/blobfs.o 00:02:50.336 CXX test/cpp_headers/blob_bdev.o 00:02:50.336 CXX test/cpp_headers/config.o 00:02:50.336 CXX test/cpp_headers/blob.o 00:02:50.336 CC app/nvmf_tgt/nvmf_main.o 00:02:50.336 CXX test/cpp_headers/conf.o 00:02:50.336 CXX test/cpp_headers/cpuset.o 00:02:50.336 CXX test/cpp_headers/crc16.o 00:02:50.336 CXX test/cpp_headers/crc64.o 00:02:50.336 CXX test/cpp_headers/crc32.o 00:02:50.336 CXX test/cpp_headers/dif.o 00:02:50.336 CXX test/cpp_headers/endian.o 00:02:50.336 CXX test/cpp_headers/dma.o 00:02:50.336 CXX test/cpp_headers/env.o 00:02:50.336 CXX test/cpp_headers/env_dpdk.o 00:02:50.336 CXX test/cpp_headers/event.o 00:02:50.336 CXX test/cpp_headers/fd_group.o 00:02:50.336 CXX test/cpp_headers/fd.o 00:02:50.336 CXX test/cpp_headers/file.o 00:02:50.336 CXX test/cpp_headers/fsdev.o 00:02:50.336 CXX test/cpp_headers/fsdev_module.o 00:02:50.336 CXX test/cpp_headers/ftl.o 00:02:50.336 CXX test/cpp_headers/fuse_dispatcher.o 00:02:50.336 CXX test/cpp_headers/gpt_spec.o 00:02:50.336 CXX test/cpp_headers/hexlify.o 00:02:50.336 CXX test/cpp_headers/idxd.o 00:02:50.336 CXX test/cpp_headers/init.o 00:02:50.336 CXX test/cpp_headers/histogram_data.o 00:02:50.336 CXX test/cpp_headers/idxd_spec.o 00:02:50.336 CXX 
test/cpp_headers/ioat.o 00:02:50.336 CXX test/cpp_headers/ioat_spec.o 00:02:50.336 CXX test/cpp_headers/json.o 00:02:50.336 CXX test/cpp_headers/jsonrpc.o 00:02:50.336 CXX test/cpp_headers/iscsi_spec.o 00:02:50.336 CXX test/cpp_headers/likely.o 00:02:50.336 CXX test/cpp_headers/keyring_module.o 00:02:50.336 CXX test/cpp_headers/keyring.o 00:02:50.336 CXX test/cpp_headers/log.o 00:02:50.336 CXX test/cpp_headers/lvol.o 00:02:50.336 CXX test/cpp_headers/md5.o 00:02:50.336 CXX test/cpp_headers/memory.o 00:02:50.336 CXX test/cpp_headers/mmio.o 00:02:50.336 CXX test/cpp_headers/nbd.o 00:02:50.336 CXX test/cpp_headers/net.o 00:02:50.336 CXX test/cpp_headers/nvme_intel.o 00:02:50.336 CXX test/cpp_headers/notify.o 00:02:50.336 CXX test/cpp_headers/nvme_ocssd.o 00:02:50.336 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:50.336 CXX test/cpp_headers/nvme.o 00:02:50.336 CXX test/cpp_headers/nvme_zns.o 00:02:50.336 CXX test/cpp_headers/nvmf_cmd.o 00:02:50.336 CXX test/cpp_headers/nvme_spec.o 00:02:50.336 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:50.336 CXX test/cpp_headers/nvmf.o 00:02:50.336 CXX test/cpp_headers/nvmf_transport.o 00:02:50.336 CXX test/cpp_headers/nvmf_spec.o 00:02:50.336 CC app/spdk_tgt/spdk_tgt.o 00:02:50.336 CXX test/cpp_headers/opal.o 00:02:50.336 CC test/thread/poller_perf/poller_perf.o 00:02:50.604 CC test/app/stub/stub.o 00:02:50.604 CC test/app/histogram_perf/histogram_perf.o 00:02:50.604 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:50.604 CC test/env/memory/memory_ut.o 00:02:50.604 CC test/env/vtophys/vtophys.o 00:02:50.604 CC test/app/bdev_svc/bdev_svc.o 00:02:50.604 CC test/env/pci/pci_ut.o 00:02:50.604 CC test/app/jsoncat/jsoncat.o 00:02:50.604 CC examples/ioat/perf/perf.o 00:02:50.604 CC test/dma/test_dma/test_dma.o 00:02:50.604 CC examples/util/zipf/zipf.o 00:02:50.604 CC app/fio/nvme/fio_plugin.o 00:02:50.604 CC examples/ioat/verify/verify.o 00:02:50.604 LINK rpc_client_test 00:02:50.604 CC app/fio/bdev/fio_plugin.o 00:02:50.866 LINK 
spdk_lspci 00:02:50.866 LINK interrupt_tgt 00:02:50.866 LINK spdk_trace_record 00:02:50.866 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:50.866 CC test/env/mem_callbacks/mem_callbacks.o 00:02:50.866 LINK histogram_perf 00:02:51.124 LINK poller_perf 00:02:51.124 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:51.124 LINK nvmf_tgt 00:02:51.124 CXX test/cpp_headers/opal_spec.o 00:02:51.124 CXX test/cpp_headers/pci_ids.o 00:02:51.124 CXX test/cpp_headers/pipe.o 00:02:51.124 CXX test/cpp_headers/queue.o 00:02:51.124 LINK spdk_nvme_discover 00:02:51.124 LINK env_dpdk_post_init 00:02:51.124 CXX test/cpp_headers/reduce.o 00:02:51.124 CXX test/cpp_headers/rpc.o 00:02:51.124 CXX test/cpp_headers/scheduler.o 00:02:51.124 CXX test/cpp_headers/scsi.o 00:02:51.124 CXX test/cpp_headers/scsi_spec.o 00:02:51.124 CXX test/cpp_headers/sock.o 00:02:51.124 CXX test/cpp_headers/stdinc.o 00:02:51.124 CXX test/cpp_headers/string.o 00:02:51.124 CXX test/cpp_headers/thread.o 00:02:51.124 CXX test/cpp_headers/trace.o 00:02:51.124 CXX test/cpp_headers/trace_parser.o 00:02:51.124 CXX test/cpp_headers/ublk.o 00:02:51.124 CXX test/cpp_headers/util.o 00:02:51.124 CXX test/cpp_headers/tree.o 00:02:51.124 CXX test/cpp_headers/uuid.o 00:02:51.124 CXX test/cpp_headers/version.o 00:02:51.124 LINK vtophys 00:02:51.124 CXX test/cpp_headers/vfio_user_pci.o 00:02:51.124 CXX test/cpp_headers/vfio_user_spec.o 00:02:51.124 LINK zipf 00:02:51.124 CXX test/cpp_headers/vhost.o 00:02:51.124 CXX test/cpp_headers/vmd.o 00:02:51.124 CXX test/cpp_headers/xor.o 00:02:51.124 CXX test/cpp_headers/zipf.o 00:02:51.124 LINK jsoncat 00:02:51.124 LINK iscsi_tgt 00:02:51.124 LINK ioat_perf 00:02:51.124 LINK stub 00:02:51.124 LINK bdev_svc 00:02:51.124 LINK spdk_tgt 00:02:51.124 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:51.124 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:51.124 LINK spdk_dd 00:02:51.124 LINK spdk_trace 00:02:51.381 LINK verify 00:02:51.382 LINK pci_ut 00:02:51.382 LINK test_dma 00:02:51.382 CC 
test/event/reactor/reactor.o 00:02:51.382 CC test/event/reactor_perf/reactor_perf.o 00:02:51.639 CC test/event/event_perf/event_perf.o 00:02:51.639 CC examples/idxd/perf/perf.o 00:02:51.639 CC test/event/scheduler/scheduler.o 00:02:51.639 CC test/event/app_repeat/app_repeat.o 00:02:51.639 CC examples/vmd/lsvmd/lsvmd.o 00:02:51.639 LINK spdk_bdev 00:02:51.639 CC examples/sock/hello_world/hello_sock.o 00:02:51.639 CC examples/vmd/led/led.o 00:02:51.639 LINK spdk_top 00:02:51.639 LINK nvme_fuzz 00:02:51.639 CC examples/thread/thread/thread_ex.o 00:02:51.639 LINK spdk_nvme 00:02:51.639 CC app/vhost/vhost.o 00:02:51.640 LINK reactor 00:02:51.640 LINK vhost_fuzz 00:02:51.640 LINK reactor_perf 00:02:51.640 LINK event_perf 00:02:51.640 LINK spdk_nvme_perf 00:02:51.640 LINK lsvmd 00:02:51.640 LINK app_repeat 00:02:51.640 LINK led 00:02:51.898 LINK spdk_nvme_identify 00:02:51.898 LINK hello_sock 00:02:51.898 LINK scheduler 00:02:51.898 LINK mem_callbacks 00:02:51.898 LINK idxd_perf 00:02:51.898 LINK thread 00:02:51.898 LINK vhost 00:02:51.898 CC test/nvme/reset/reset.o 00:02:51.898 CC test/nvme/compliance/nvme_compliance.o 00:02:51.898 CC test/nvme/simple_copy/simple_copy.o 00:02:51.898 CC test/nvme/err_injection/err_injection.o 00:02:51.898 CC test/nvme/e2edp/nvme_dp.o 00:02:51.898 CC test/nvme/startup/startup.o 00:02:51.898 CC test/nvme/boot_partition/boot_partition.o 00:02:51.898 CC test/nvme/cuse/cuse.o 00:02:51.898 CC test/nvme/aer/aer.o 00:02:51.898 CC test/nvme/reserve/reserve.o 00:02:51.898 CC test/nvme/fused_ordering/fused_ordering.o 00:02:51.898 CC test/nvme/connect_stress/connect_stress.o 00:02:51.898 CC test/nvme/sgl/sgl.o 00:02:51.898 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:51.898 CC test/blobfs/mkfs/mkfs.o 00:02:51.898 CC test/accel/dif/dif.o 00:02:51.898 CC test/nvme/overhead/overhead.o 00:02:51.898 CC test/nvme/fdp/fdp.o 00:02:52.157 LINK memory_ut 00:02:52.157 CC test/lvol/esnap/esnap.o 00:02:52.157 LINK boot_partition 00:02:52.157 LINK startup 
00:02:52.157 LINK err_injection 00:02:52.157 LINK connect_stress 00:02:52.157 LINK reserve 00:02:52.157 LINK fused_ordering 00:02:52.157 LINK doorbell_aers 00:02:52.157 LINK simple_copy 00:02:52.157 LINK mkfs 00:02:52.157 LINK reset 00:02:52.157 CC examples/nvme/hello_world/hello_world.o 00:02:52.157 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:52.157 CC examples/nvme/hotplug/hotplug.o 00:02:52.157 CC examples/nvme/abort/abort.o 00:02:52.157 CC examples/nvme/arbitration/arbitration.o 00:02:52.157 CC examples/nvme/reconnect/reconnect.o 00:02:52.157 LINK nvme_compliance 00:02:52.157 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:52.157 LINK nvme_dp 00:02:52.157 LINK sgl 00:02:52.157 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:52.157 LINK aer 00:02:52.157 LINK overhead 00:02:52.157 LINK fdp 00:02:52.415 CC examples/accel/perf/accel_perf.o 00:02:52.415 CC examples/blob/cli/blobcli.o 00:02:52.415 CC examples/blob/hello_world/hello_blob.o 00:02:52.415 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:52.415 LINK cmb_copy 00:02:52.415 LINK pmr_persistence 00:02:52.415 LINK hello_world 00:02:52.415 LINK hotplug 00:02:52.415 LINK iscsi_fuzz 00:02:52.415 LINK arbitration 00:02:52.415 LINK reconnect 00:02:52.415 LINK abort 00:02:52.674 LINK dif 00:02:52.674 LINK hello_blob 00:02:52.674 LINK nvme_manage 00:02:52.674 LINK hello_fsdev 00:02:52.674 LINK accel_perf 00:02:52.674 LINK blobcli 00:02:52.932 LINK cuse 00:02:53.192 CC test/bdev/bdevio/bdevio.o 00:02:53.192 CC examples/bdev/bdevperf/bdevperf.o 00:02:53.192 CC examples/bdev/hello_world/hello_bdev.o 00:02:53.450 LINK hello_bdev 00:02:53.450 LINK bdevio 00:02:53.709 LINK bdevperf 00:02:54.327 CC examples/nvmf/nvmf/nvmf.o 00:02:54.608 LINK nvmf 00:02:55.611 LINK esnap 00:02:55.869 00:02:55.869 real 0m54.845s 00:02:55.869 user 7m57.110s 00:02:55.869 sys 3m34.905s 00:02:55.869 12:26:38 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:55.869 12:26:38 make -- common/autotest_common.sh@10 -- $ set +x 
00:02:55.869 ************************************ 00:02:55.869 END TEST make 00:02:55.869 ************************************ 00:02:55.869 12:26:38 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:55.869 12:26:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:55.869 12:26:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:55.869 12:26:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.869 12:26:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:55.869 12:26:38 -- pm/common@44 -- $ pid=2250283 00:02:55.869 12:26:38 -- pm/common@50 -- $ kill -TERM 2250283 00:02:55.869 12:26:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.869 12:26:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:55.869 12:26:38 -- pm/common@44 -- $ pid=2250285 00:02:55.869 12:26:38 -- pm/common@50 -- $ kill -TERM 2250285 00:02:55.869 12:26:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.869 12:26:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:55.870 12:26:38 -- pm/common@44 -- $ pid=2250286 00:02:55.870 12:26:38 -- pm/common@50 -- $ kill -TERM 2250286 00:02:55.870 12:26:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.870 12:26:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:55.870 12:26:38 -- pm/common@44 -- $ pid=2250313 00:02:55.870 12:26:38 -- pm/common@50 -- $ sudo -E kill -TERM 2250313 00:02:56.128 12:26:38 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:56.128 12:26:38 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:56.129 12:26:38 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:56.129 12:26:38 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:56.129 12:26:38 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:56.129 12:26:38 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:56.129 12:26:38 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:56.129 12:26:38 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:56.129 12:26:38 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:56.129 12:26:38 -- scripts/common.sh@336 -- # IFS=.-: 00:02:56.129 12:26:38 -- scripts/common.sh@336 -- # read -ra ver1 00:02:56.129 12:26:38 -- scripts/common.sh@337 -- # IFS=.-: 00:02:56.129 12:26:38 -- scripts/common.sh@337 -- # read -ra ver2 00:02:56.129 12:26:38 -- scripts/common.sh@338 -- # local 'op=<' 00:02:56.129 12:26:38 -- scripts/common.sh@340 -- # ver1_l=2 00:02:56.129 12:26:38 -- scripts/common.sh@341 -- # ver2_l=1 00:02:56.129 12:26:38 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:56.129 12:26:38 -- scripts/common.sh@344 -- # case "$op" in 00:02:56.129 12:26:38 -- scripts/common.sh@345 -- # : 1 00:02:56.129 12:26:38 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:56.129 12:26:38 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:56.129 12:26:38 -- scripts/common.sh@365 -- # decimal 1 00:02:56.129 12:26:38 -- scripts/common.sh@353 -- # local d=1 00:02:56.129 12:26:38 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:56.129 12:26:38 -- scripts/common.sh@355 -- # echo 1 00:02:56.129 12:26:38 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:56.129 12:26:38 -- scripts/common.sh@366 -- # decimal 2 00:02:56.129 12:26:38 -- scripts/common.sh@353 -- # local d=2 00:02:56.129 12:26:38 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:56.129 12:26:38 -- scripts/common.sh@355 -- # echo 2 00:02:56.129 12:26:38 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:56.129 12:26:38 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:56.129 12:26:38 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:56.129 12:26:38 -- scripts/common.sh@368 -- # return 0 00:02:56.129 12:26:38 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:56.129 12:26:38 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:56.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:56.129 --rc genhtml_branch_coverage=1 00:02:56.129 --rc genhtml_function_coverage=1 00:02:56.129 --rc genhtml_legend=1 00:02:56.129 --rc geninfo_all_blocks=1 00:02:56.129 --rc geninfo_unexecuted_blocks=1 00:02:56.129 00:02:56.129 ' 00:02:56.129 12:26:38 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:56.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:56.129 --rc genhtml_branch_coverage=1 00:02:56.129 --rc genhtml_function_coverage=1 00:02:56.129 --rc genhtml_legend=1 00:02:56.129 --rc geninfo_all_blocks=1 00:02:56.129 --rc geninfo_unexecuted_blocks=1 00:02:56.129 00:02:56.129 ' 00:02:56.129 12:26:38 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:56.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:56.129 --rc genhtml_branch_coverage=1 00:02:56.129 --rc 
genhtml_function_coverage=1 00:02:56.129 --rc genhtml_legend=1 00:02:56.129 --rc geninfo_all_blocks=1 00:02:56.129 --rc geninfo_unexecuted_blocks=1 00:02:56.129 00:02:56.129 ' 00:02:56.129 12:26:38 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:56.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:56.129 --rc genhtml_branch_coverage=1 00:02:56.129 --rc genhtml_function_coverage=1 00:02:56.129 --rc genhtml_legend=1 00:02:56.129 --rc geninfo_all_blocks=1 00:02:56.129 --rc geninfo_unexecuted_blocks=1 00:02:56.129 00:02:56.129 ' 00:02:56.129 12:26:38 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:56.129 12:26:38 -- nvmf/common.sh@7 -- # uname -s 00:02:56.129 12:26:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:56.129 12:26:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:56.129 12:26:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:56.129 12:26:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:56.129 12:26:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:56.129 12:26:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:56.129 12:26:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:56.129 12:26:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:56.129 12:26:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:56.129 12:26:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:56.129 12:26:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:56.129 12:26:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:56.129 12:26:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:56.129 12:26:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:56.129 12:26:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:56.129 12:26:38 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:56.129 12:26:38 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:56.129 12:26:38 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:56.129 12:26:38 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:56.129 12:26:38 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:56.129 12:26:38 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:56.129 12:26:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.129 12:26:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.129 12:26:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.129 12:26:38 -- paths/export.sh@5 -- # export PATH 00:02:56.129 12:26:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.129 12:26:38 -- nvmf/common.sh@51 -- # : 0 00:02:56.129 12:26:38 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:56.129 12:26:38 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:56.129 12:26:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:56.129 12:26:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:56.129 12:26:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:56.129 12:26:38 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:56.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:56.129 12:26:38 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:56.129 12:26:38 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:56.129 12:26:38 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:56.129 12:26:38 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:56.129 12:26:38 -- spdk/autotest.sh@32 -- # uname -s 00:02:56.129 12:26:38 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:56.129 12:26:38 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:56.129 12:26:38 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:56.129 12:26:38 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:56.129 12:26:38 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:56.129 12:26:38 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:56.129 12:26:38 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:56.129 12:26:38 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:56.129 12:26:38 -- spdk/autotest.sh@48 -- # udevadm_pid=2312504 00:02:56.129 12:26:38 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:56.129 12:26:38 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:56.129 12:26:38 -- pm/common@17 -- # local monitor 00:02:56.129 12:26:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:56.129 12:26:38 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:56.129 12:26:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:56.129 12:26:38 -- pm/common@21 -- # date +%s 00:02:56.129 12:26:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:56.129 12:26:38 -- pm/common@21 -- # date +%s 00:02:56.129 12:26:38 -- pm/common@25 -- # sleep 1 00:02:56.129 12:26:38 -- pm/common@21 -- # date +%s 00:02:56.387 12:26:38 -- pm/common@21 -- # date +%s 00:02:56.387 12:26:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732793198 00:02:56.387 12:26:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732793198 00:02:56.387 12:26:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732793198 00:02:56.387 12:26:38 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732793198 00:02:56.387 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732793198_collect-cpu-load.pm.log 00:02:56.387 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732793198_collect-vmstat.pm.log 00:02:56.387 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732793198_collect-cpu-temp.pm.log 00:02:56.387 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732793198_collect-bmc-pm.bmc.pm.log 00:02:57.322 
12:26:39 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:57.322 12:26:39 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:57.322 12:26:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:57.322 12:26:39 -- common/autotest_common.sh@10 -- # set +x 00:02:57.322 12:26:39 -- spdk/autotest.sh@59 -- # create_test_list 00:02:57.322 12:26:39 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:57.322 12:26:39 -- common/autotest_common.sh@10 -- # set +x 00:02:57.322 12:26:39 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:57.322 12:26:39 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:57.322 12:26:39 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:57.322 12:26:39 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:57.322 12:26:39 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:57.322 12:26:39 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:57.322 12:26:39 -- common/autotest_common.sh@1457 -- # uname 00:02:57.322 12:26:39 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:57.322 12:26:39 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:57.322 12:26:39 -- common/autotest_common.sh@1477 -- # uname 00:02:57.322 12:26:39 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:57.322 12:26:39 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:57.322 12:26:39 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:57.322 lcov: LCOV version 1.15 00:02:57.322 12:26:39 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:12.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:12.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:24.400 12:27:05 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:24.400 12:27:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:24.400 12:27:05 -- common/autotest_common.sh@10 -- # set +x 00:03:24.400 12:27:05 -- spdk/autotest.sh@78 -- # rm -f 00:03:24.400 12:27:05 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:25.781 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:25.781 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:25.781 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:25.781 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:25.781 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:25.781 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:25.781 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:25.781 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:25.781 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:25.781 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:25.781 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:25.781 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:25.781 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:26.041 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:26.041 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:26.041 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:26.041 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:26.041 12:27:08 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:26.041 12:27:08 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:26.041 12:27:08 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:26.041 12:27:08 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:26.041 12:27:08 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:26.041 12:27:08 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:26.041 12:27:08 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:26.041 12:27:08 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:26.041 12:27:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:26.041 12:27:08 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:26.041 12:27:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:26.041 12:27:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:26.041 12:27:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:26.041 12:27:08 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:26.041 12:27:08 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:26.041 No valid GPT data, bailing 00:03:26.041 12:27:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:26.041 12:27:08 -- scripts/common.sh@394 -- # pt= 00:03:26.041 12:27:08 -- scripts/common.sh@395 -- # return 1 00:03:26.041 12:27:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:26.041 1+0 records in 00:03:26.041 1+0 records out 00:03:26.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00175629 s, 597 MB/s 00:03:26.041 12:27:08 -- spdk/autotest.sh@105 -- # sync 00:03:26.041 12:27:08 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:26.041 12:27:08 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:26.041 12:27:08 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:32.614 12:27:13 -- spdk/autotest.sh@111 -- # uname -s 00:03:32.614 12:27:13 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:32.614 12:27:13 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:32.614 12:27:13 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:34.522 Hugepages 00:03:34.522 node hugesize free / total 00:03:34.522 node0 1048576kB 0 / 0 00:03:34.522 node0 2048kB 0 / 0 00:03:34.522 node1 1048576kB 0 / 0 00:03:34.522 node1 2048kB 0 / 0 00:03:34.522 00:03:34.522 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:34.522 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:34.522 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:34.522 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:34.522 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:34.522 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:34.522 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:34.522 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:34.522 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:34.522 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:34.522 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:34.522 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:34.522 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:34.522 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:34.522 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:34.522 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:34.522 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:34.522 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:34.522 12:27:16 -- spdk/autotest.sh@117 -- # uname -s 00:03:34.522 12:27:16 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:34.522 12:27:16 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:03:34.522 12:27:16 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:37.815 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:37.815 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:37.815 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:37.815 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:37.815 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:37.815 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:37.815 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:37.815 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:37.815 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:37.815 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:37.815 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:37.815 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:37.815 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:37.815 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:37.815 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:37.815 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:38.382 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:38.382 12:27:20 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:39.326 12:27:21 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:39.326 12:27:21 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:39.326 12:27:21 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:39.326 12:27:21 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:39.326 12:27:21 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:39.326 12:27:21 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:39.326 12:27:21 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:39.326 12:27:21 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:39.326 12:27:21 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:03:39.587 12:27:21 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:39.587 12:27:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:39.587 12:27:21 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.124 Waiting for block devices as requested 00:03:42.124 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:42.384 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:42.384 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:42.384 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:42.384 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:42.643 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:42.643 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:42.643 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:42.643 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:42.901 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:42.901 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:42.901 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:43.160 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:43.160 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:43.160 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:43.160 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:43.419 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:43.419 12:27:25 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:43.419 12:27:25 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:43.419 12:27:25 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:43.419 12:27:25 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:43.419 12:27:25 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:43.419 12:27:25 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:43.419 12:27:25 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:43.419 12:27:25 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:43.419 12:27:25 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:43.419 12:27:25 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:43.419 12:27:25 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:43.419 12:27:25 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:43.419 12:27:25 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:43.419 12:27:25 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:43.419 12:27:25 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:43.419 12:27:25 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:43.419 12:27:25 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:43.419 12:27:25 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:43.419 12:27:25 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:43.419 12:27:25 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:43.419 12:27:25 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:43.419 12:27:25 -- common/autotest_common.sh@1543 -- # continue 00:03:43.419 12:27:25 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:43.419 12:27:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:43.419 12:27:25 -- common/autotest_common.sh@10 -- # set +x 00:03:43.419 12:27:25 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:43.419 12:27:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:43.419 12:27:25 -- common/autotest_common.sh@10 -- # set +x 00:03:43.419 12:27:25 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:45.956 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:45.956 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:03:45.956 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:45.956 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:45.956 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:45.956 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:45.956 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:45.956 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:46.215 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:46.215 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:46.215 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:46.215 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:46.215 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:46.215 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:46.215 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:46.215 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:47.153 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:47.153 12:27:29 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:47.153 12:27:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:47.153 12:27:29 -- common/autotest_common.sh@10 -- # set +x 00:03:47.153 12:27:29 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:47.153 12:27:29 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:47.153 12:27:29 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:47.153 12:27:29 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:47.153 12:27:29 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:47.153 12:27:29 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:47.153 12:27:29 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:47.153 12:27:29 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:47.153 12:27:29 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:47.153 12:27:29 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:47.153 12:27:29 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:03:47.153 12:27:29 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:47.153 12:27:29 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:47.153 12:27:29 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:47.153 12:27:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:47.153 12:27:29 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:47.153 12:27:29 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:47.153 12:27:29 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:47.153 12:27:29 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:47.153 12:27:29 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:47.153 12:27:29 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:47.153 12:27:29 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:47.153 12:27:29 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:47.153 12:27:29 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2327438 00:03:47.153 12:27:29 -- common/autotest_common.sh@1585 -- # waitforlisten 2327438 00:03:47.153 12:27:29 -- common/autotest_common.sh@835 -- # '[' -z 2327438 ']' 00:03:47.153 12:27:29 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:47.153 12:27:29 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:47.153 12:27:29 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:47.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:47.153 12:27:29 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:47.153 12:27:29 -- common/autotest_common.sh@10 -- # set +x 00:03:47.153 12:27:29 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.417 [2024-11-28 12:27:29.711593] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:03:47.417 [2024-11-28 12:27:29.711640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2327438 ] 00:03:47.417 [2024-11-28 12:27:29.773467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.417 [2024-11-28 12:27:29.815962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:47.675 12:27:30 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:47.675 12:27:30 -- common/autotest_common.sh@868 -- # return 0 00:03:47.675 12:27:30 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:47.675 12:27:30 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:47.675 12:27:30 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:50.965 nvme0n1 00:03:50.965 12:27:33 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:50.965 [2024-11-28 12:27:33.214182] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:50.965 request: 00:03:50.965 { 00:03:50.965 "nvme_ctrlr_name": "nvme0", 00:03:50.965 "password": "test", 00:03:50.965 "method": "bdev_nvme_opal_revert", 00:03:50.965 "req_id": 1 00:03:50.965 } 00:03:50.965 Got JSON-RPC error response 00:03:50.965 response: 00:03:50.965 { 00:03:50.965 "code": -32602, 
00:03:50.965 "message": "Invalid parameters" 00:03:50.965 } 00:03:50.965 12:27:33 -- common/autotest_common.sh@1591 -- # true 00:03:50.965 12:27:33 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:50.965 12:27:33 -- common/autotest_common.sh@1595 -- # killprocess 2327438 00:03:50.965 12:27:33 -- common/autotest_common.sh@954 -- # '[' -z 2327438 ']' 00:03:50.965 12:27:33 -- common/autotest_common.sh@958 -- # kill -0 2327438 00:03:50.965 12:27:33 -- common/autotest_common.sh@959 -- # uname 00:03:50.965 12:27:33 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:50.965 12:27:33 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2327438 00:03:50.965 12:27:33 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:50.965 12:27:33 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:50.965 12:27:33 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2327438' 00:03:50.965 killing process with pid 2327438 00:03:50.965 12:27:33 -- common/autotest_common.sh@973 -- # kill 2327438 00:03:50.965 12:27:33 -- common/autotest_common.sh@978 -- # wait 2327438 00:03:52.870 12:27:34 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:52.870 12:27:34 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:52.870 12:27:34 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:52.870 12:27:34 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:52.870 12:27:34 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:52.870 12:27:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:52.870 12:27:34 -- common/autotest_common.sh@10 -- # set +x 00:03:52.870 12:27:34 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:52.870 12:27:34 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:52.870 12:27:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.870 12:27:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.870 12:27:34 -- 
common/autotest_common.sh@10 -- # set +x 00:03:52.870 ************************************ 00:03:52.870 START TEST env 00:03:52.870 ************************************ 00:03:52.870 12:27:34 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:52.870 * Looking for test storage... 00:03:52.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:52.870 12:27:35 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:52.870 12:27:35 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:52.870 12:27:35 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:52.870 12:27:35 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:52.870 12:27:35 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.870 12:27:35 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.870 12:27:35 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:52.870 12:27:35 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.870 12:27:35 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.870 12:27:35 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.870 12:27:35 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.870 12:27:35 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.870 12:27:35 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:52.870 12:27:35 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.870 12:27:35 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:52.870 12:27:35 env -- scripts/common.sh@344 -- # case "$op" in 00:03:52.870 12:27:35 env -- scripts/common.sh@345 -- # : 1 00:03:52.870 12:27:35 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.870 12:27:35 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:52.870 12:27:35 env -- scripts/common.sh@365 -- # decimal 1 00:03:52.870 12:27:35 env -- scripts/common.sh@353 -- # local d=1 00:03:52.870 12:27:35 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.870 12:27:35 env -- scripts/common.sh@355 -- # echo 1 00:03:52.870 12:27:35 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.870 12:27:35 env -- scripts/common.sh@366 -- # decimal 2 00:03:52.870 12:27:35 env -- scripts/common.sh@353 -- # local d=2 00:03:52.870 12:27:35 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.870 12:27:35 env -- scripts/common.sh@355 -- # echo 2 00:03:52.870 12:27:35 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.870 12:27:35 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.870 12:27:35 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.870 12:27:35 env -- scripts/common.sh@368 -- # return 0 00:03:52.870 12:27:35 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.870 12:27:35 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:52.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.870 --rc genhtml_branch_coverage=1 00:03:52.870 --rc genhtml_function_coverage=1 00:03:52.870 --rc genhtml_legend=1 00:03:52.870 --rc geninfo_all_blocks=1 00:03:52.870 --rc geninfo_unexecuted_blocks=1 00:03:52.870 00:03:52.870 ' 00:03:52.870 12:27:35 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:52.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.870 --rc genhtml_branch_coverage=1 00:03:52.870 --rc genhtml_function_coverage=1 00:03:52.870 --rc genhtml_legend=1 00:03:52.870 --rc geninfo_all_blocks=1 00:03:52.870 --rc geninfo_unexecuted_blocks=1 00:03:52.870 00:03:52.870 ' 00:03:52.870 12:27:35 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:52.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:52.870 --rc genhtml_branch_coverage=1 00:03:52.870 --rc genhtml_function_coverage=1 00:03:52.870 --rc genhtml_legend=1 00:03:52.870 --rc geninfo_all_blocks=1 00:03:52.870 --rc geninfo_unexecuted_blocks=1 00:03:52.870 00:03:52.870 ' 00:03:52.870 12:27:35 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:52.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.870 --rc genhtml_branch_coverage=1 00:03:52.870 --rc genhtml_function_coverage=1 00:03:52.870 --rc genhtml_legend=1 00:03:52.870 --rc geninfo_all_blocks=1 00:03:52.870 --rc geninfo_unexecuted_blocks=1 00:03:52.870 00:03:52.870 ' 00:03:52.870 12:27:35 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:52.870 12:27:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.870 12:27:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.870 12:27:35 env -- common/autotest_common.sh@10 -- # set +x 00:03:52.870 ************************************ 00:03:52.870 START TEST env_memory 00:03:52.870 ************************************ 00:03:52.870 12:27:35 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:52.870 00:03:52.870 00:03:52.870 CUnit - A unit testing framework for C - Version 2.1-3 00:03:52.870 http://cunit.sourceforge.net/ 00:03:52.870 00:03:52.870 00:03:52.870 Suite: memory 00:03:52.870 Test: alloc and free memory map ...[2024-11-28 12:27:35.190482] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:52.870 passed 00:03:52.870 Test: mem map translation ...[2024-11-28 12:27:35.210493] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:52.870 [2024-11-28 
12:27:35.210510] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:52.870 [2024-11-28 12:27:35.210548] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:52.870 [2024-11-28 12:27:35.210554] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:52.870 passed 00:03:52.870 Test: mem map registration ...[2024-11-28 12:27:35.251787] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:52.870 [2024-11-28 12:27:35.251807] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:52.870 passed 00:03:52.870 Test: mem map adjacent registrations ...passed 00:03:52.870 00:03:52.870 Run Summary: Type Total Ran Passed Failed Inactive 00:03:52.870 suites 1 1 n/a 0 0 00:03:52.870 tests 4 4 4 0 0 00:03:52.870 asserts 152 152 152 0 n/a 00:03:52.870 00:03:52.870 Elapsed time = 0.145 seconds 00:03:52.870 00:03:52.870 real 0m0.158s 00:03:52.870 user 0m0.150s 00:03:52.870 sys 0m0.007s 00:03:52.870 12:27:35 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.870 12:27:35 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:52.870 ************************************ 00:03:52.870 END TEST env_memory 00:03:52.870 ************************************ 00:03:52.870 12:27:35 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:52.870 12:27:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:03:52.870 12:27:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.870 12:27:35 env -- common/autotest_common.sh@10 -- # set +x 00:03:52.870 ************************************ 00:03:52.870 START TEST env_vtophys 00:03:52.870 ************************************ 00:03:52.870 12:27:35 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:53.130 EAL: lib.eal log level changed from notice to debug 00:03:53.130 EAL: Detected lcore 0 as core 0 on socket 0 00:03:53.130 EAL: Detected lcore 1 as core 1 on socket 0 00:03:53.130 EAL: Detected lcore 2 as core 2 on socket 0 00:03:53.130 EAL: Detected lcore 3 as core 3 on socket 0 00:03:53.130 EAL: Detected lcore 4 as core 4 on socket 0 00:03:53.130 EAL: Detected lcore 5 as core 5 on socket 0 00:03:53.130 EAL: Detected lcore 6 as core 6 on socket 0 00:03:53.130 EAL: Detected lcore 7 as core 8 on socket 0 00:03:53.130 EAL: Detected lcore 8 as core 9 on socket 0 00:03:53.130 EAL: Detected lcore 9 as core 10 on socket 0 00:03:53.130 EAL: Detected lcore 10 as core 11 on socket 0 00:03:53.130 EAL: Detected lcore 11 as core 12 on socket 0 00:03:53.130 EAL: Detected lcore 12 as core 13 on socket 0 00:03:53.130 EAL: Detected lcore 13 as core 16 on socket 0 00:03:53.130 EAL: Detected lcore 14 as core 17 on socket 0 00:03:53.130 EAL: Detected lcore 15 as core 18 on socket 0 00:03:53.130 EAL: Detected lcore 16 as core 19 on socket 0 00:03:53.130 EAL: Detected lcore 17 as core 20 on socket 0 00:03:53.130 EAL: Detected lcore 18 as core 21 on socket 0 00:03:53.130 EAL: Detected lcore 19 as core 25 on socket 0 00:03:53.130 EAL: Detected lcore 20 as core 26 on socket 0 00:03:53.130 EAL: Detected lcore 21 as core 27 on socket 0 00:03:53.130 EAL: Detected lcore 22 as core 28 on socket 0 00:03:53.130 EAL: Detected lcore 23 as core 29 on socket 0 00:03:53.130 EAL: Detected lcore 24 as core 0 on socket 1 00:03:53.130 EAL: Detected lcore 25 
as core 1 on socket 1 00:03:53.130 EAL: Detected lcore 26 as core 2 on socket 1 00:03:53.130 EAL: Detected lcore 27 as core 3 on socket 1 00:03:53.130 EAL: Detected lcore 28 as core 4 on socket 1 00:03:53.130 EAL: Detected lcore 29 as core 5 on socket 1 00:03:53.130 EAL: Detected lcore 30 as core 6 on socket 1 00:03:53.130 EAL: Detected lcore 31 as core 9 on socket 1 00:03:53.130 EAL: Detected lcore 32 as core 10 on socket 1 00:03:53.130 EAL: Detected lcore 33 as core 11 on socket 1 00:03:53.130 EAL: Detected lcore 34 as core 12 on socket 1 00:03:53.130 EAL: Detected lcore 35 as core 13 on socket 1 00:03:53.130 EAL: Detected lcore 36 as core 16 on socket 1 00:03:53.130 EAL: Detected lcore 37 as core 17 on socket 1 00:03:53.130 EAL: Detected lcore 38 as core 18 on socket 1 00:03:53.130 EAL: Detected lcore 39 as core 19 on socket 1 00:03:53.130 EAL: Detected lcore 40 as core 20 on socket 1 00:03:53.130 EAL: Detected lcore 41 as core 21 on socket 1 00:03:53.130 EAL: Detected lcore 42 as core 24 on socket 1 00:03:53.130 EAL: Detected lcore 43 as core 25 on socket 1 00:03:53.130 EAL: Detected lcore 44 as core 26 on socket 1 00:03:53.130 EAL: Detected lcore 45 as core 27 on socket 1 00:03:53.130 EAL: Detected lcore 46 as core 28 on socket 1 00:03:53.130 EAL: Detected lcore 47 as core 29 on socket 1 00:03:53.130 EAL: Detected lcore 48 as core 0 on socket 0 00:03:53.130 EAL: Detected lcore 49 as core 1 on socket 0 00:03:53.130 EAL: Detected lcore 50 as core 2 on socket 0 00:03:53.130 EAL: Detected lcore 51 as core 3 on socket 0 00:03:53.130 EAL: Detected lcore 52 as core 4 on socket 0 00:03:53.130 EAL: Detected lcore 53 as core 5 on socket 0 00:03:53.130 EAL: Detected lcore 54 as core 6 on socket 0 00:03:53.130 EAL: Detected lcore 55 as core 8 on socket 0 00:03:53.130 EAL: Detected lcore 56 as core 9 on socket 0 00:03:53.130 EAL: Detected lcore 57 as core 10 on socket 0 00:03:53.130 EAL: Detected lcore 58 as core 11 on socket 0 00:03:53.130 EAL: Detected lcore 59 as core 
12 on socket 0 00:03:53.130 EAL: Detected lcore 60 as core 13 on socket 0 00:03:53.130 EAL: Detected lcore 61 as core 16 on socket 0 00:03:53.130 EAL: Detected lcore 62 as core 17 on socket 0 00:03:53.130 EAL: Detected lcore 63 as core 18 on socket 0 00:03:53.130 EAL: Detected lcore 64 as core 19 on socket 0 00:03:53.130 EAL: Detected lcore 65 as core 20 on socket 0 00:03:53.130 EAL: Detected lcore 66 as core 21 on socket 0 00:03:53.130 EAL: Detected lcore 67 as core 25 on socket 0 00:03:53.130 EAL: Detected lcore 68 as core 26 on socket 0 00:03:53.130 EAL: Detected lcore 69 as core 27 on socket 0 00:03:53.130 EAL: Detected lcore 70 as core 28 on socket 0 00:03:53.130 EAL: Detected lcore 71 as core 29 on socket 0 00:03:53.130 EAL: Detected lcore 72 as core 0 on socket 1 00:03:53.130 EAL: Detected lcore 73 as core 1 on socket 1 00:03:53.130 EAL: Detected lcore 74 as core 2 on socket 1 00:03:53.130 EAL: Detected lcore 75 as core 3 on socket 1 00:03:53.130 EAL: Detected lcore 76 as core 4 on socket 1 00:03:53.130 EAL: Detected lcore 77 as core 5 on socket 1 00:03:53.130 EAL: Detected lcore 78 as core 6 on socket 1 00:03:53.130 EAL: Detected lcore 79 as core 9 on socket 1 00:03:53.130 EAL: Detected lcore 80 as core 10 on socket 1 00:03:53.130 EAL: Detected lcore 81 as core 11 on socket 1 00:03:53.130 EAL: Detected lcore 82 as core 12 on socket 1 00:03:53.130 EAL: Detected lcore 83 as core 13 on socket 1 00:03:53.130 EAL: Detected lcore 84 as core 16 on socket 1 00:03:53.130 EAL: Detected lcore 85 as core 17 on socket 1 00:03:53.130 EAL: Detected lcore 86 as core 18 on socket 1 00:03:53.130 EAL: Detected lcore 87 as core 19 on socket 1 00:03:53.130 EAL: Detected lcore 88 as core 20 on socket 1 00:03:53.130 EAL: Detected lcore 89 as core 21 on socket 1 00:03:53.130 EAL: Detected lcore 90 as core 24 on socket 1 00:03:53.130 EAL: Detected lcore 91 as core 25 on socket 1 00:03:53.130 EAL: Detected lcore 92 as core 26 on socket 1 00:03:53.130 EAL: Detected lcore 93 as core 
27 on socket 1 00:03:53.130 EAL: Detected lcore 94 as core 28 on socket 1 00:03:53.130 EAL: Detected lcore 95 as core 29 on socket 1 00:03:53.130 EAL: Maximum logical cores by configuration: 128 00:03:53.130 EAL: Detected CPU lcores: 96 00:03:53.130 EAL: Detected NUMA nodes: 2 00:03:53.130 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:53.130 EAL: Detected shared linkage of DPDK 00:03:53.130 EAL: No shared files mode enabled, IPC will be disabled 00:03:53.130 EAL: Bus pci wants IOVA as 'DC' 00:03:53.130 EAL: Buses did not request a specific IOVA mode. 00:03:53.130 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:53.130 EAL: Selected IOVA mode 'VA' 00:03:53.130 EAL: Probing VFIO support... 00:03:53.130 EAL: IOMMU type 1 (Type 1) is supported 00:03:53.130 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:53.130 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:53.130 EAL: VFIO support initialized 00:03:53.130 EAL: Ask a virtual area of 0x2e000 bytes 00:03:53.130 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:53.130 EAL: Setting up physically contiguous memory... 
00:03:53.130 EAL: Setting maximum number of open files to 524288 00:03:53.130 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:53.130 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:53.130 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:53.130 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.130 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:53.130 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.130 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.130 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:53.130 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:53.130 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.130 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:53.130 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.130 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.130 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:53.130 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:53.130 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.130 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:53.130 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.130 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.130 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:53.130 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:53.130 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.130 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:53.130 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.130 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.130 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:53.130 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:53.130 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:03:53.130 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.131 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:53.131 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:53.131 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.131 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:53.131 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:53.131 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.131 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:53.131 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:53.131 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.131 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:53.131 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:53.131 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.131 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:53.131 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:53.131 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.131 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:53.131 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:53.131 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.131 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:53.131 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:53.131 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.131 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:53.131 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:53.131 EAL: Hugepages will be freed exactly as allocated. 
00:03:53.131 EAL: No shared files mode enabled, IPC is disabled 00:03:53.131 EAL: No shared files mode enabled, IPC is disabled 00:03:53.131 EAL: TSC frequency is ~2300000 KHz 00:03:53.131 EAL: Main lcore 0 is ready (tid=7f201afcaa00;cpuset=[0]) 00:03:53.131 EAL: Trying to obtain current memory policy. 00:03:53.131 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.131 EAL: Restoring previous memory policy: 0 00:03:53.131 EAL: request: mp_malloc_sync 00:03:53.131 EAL: No shared files mode enabled, IPC is disabled 00:03:53.131 EAL: Heap on socket 0 was expanded by 2MB 00:03:53.131 EAL: No shared files mode enabled, IPC is disabled 00:03:53.131 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:53.131 EAL: Mem event callback 'spdk:(nil)' registered 00:03:53.131 00:03:53.131 00:03:53.131 CUnit - A unit testing framework for C - Version 2.1-3 00:03:53.131 http://cunit.sourceforge.net/ 00:03:53.131 00:03:53.131 00:03:53.131 Suite: components_suite 00:03:53.131 Test: vtophys_malloc_test ...passed 00:03:53.131 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:53.131 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.131 EAL: Restoring previous memory policy: 4 00:03:53.131 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.131 EAL: request: mp_malloc_sync 00:03:53.131 EAL: No shared files mode enabled, IPC is disabled 00:03:53.131 EAL: Heap on socket 0 was expanded by 4MB 00:03:53.131 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.131 EAL: request: mp_malloc_sync 00:03:53.131 EAL: No shared files mode enabled, IPC is disabled 00:03:53.131 EAL: Heap on socket 0 was shrunk by 4MB 00:03:53.131 EAL: Trying to obtain current memory policy. 
00:03:53.131 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.131 EAL: Restoring previous memory policy: 4 00:03:53.131 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.131 EAL: request: mp_malloc_sync 00:03:53.131 EAL: No shared files mode enabled, IPC is disabled 00:03:53.131 EAL: Heap on socket 0 was expanded by 6MB 00:03:53.131 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.131 EAL: request: mp_malloc_sync 00:03:53.131 EAL: No shared files mode enabled, IPC is disabled 00:03:53.131 EAL: Heap on socket 0 was shrunk by 6MB 00:03:53.131 EAL: Trying to obtain current memory policy. 00:03:53.131 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.131 EAL: Restoring previous memory policy: 4 00:03:53.131 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.131 EAL: request: mp_malloc_sync 00:03:53.131 EAL: No shared files mode enabled, IPC is disabled 00:03:53.131 EAL: Heap on socket 0 was expanded by 10MB 00:03:53.131 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.131 EAL: request: mp_malloc_sync 00:03:53.131 EAL: No shared files mode enabled, IPC is disabled 00:03:53.131 EAL: Heap on socket 0 was shrunk by 10MB 00:03:53.131 EAL: Trying to obtain current memory policy. 00:03:53.131 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.131 EAL: Restoring previous memory policy: 4 00:03:53.131 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.131 EAL: request: mp_malloc_sync 00:03:53.131 EAL: No shared files mode enabled, IPC is disabled 00:03:53.131 EAL: Heap on socket 0 was expanded by 18MB 00:03:53.131 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.131 EAL: request: mp_malloc_sync 00:03:53.131 EAL: No shared files mode enabled, IPC is disabled 00:03:53.131 EAL: Heap on socket 0 was shrunk by 18MB 00:03:53.131 EAL: Trying to obtain current memory policy. 
00:03:53.131 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.131 EAL: Restoring previous memory policy: 4 00:03:53.131 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.131 EAL: request: mp_malloc_sync 00:03:53.131 EAL: No shared files mode enabled, IPC is disabled 00:03:53.131 EAL: Heap on socket 0 was expanded by 34MB 00:03:53.131 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.131 EAL: request: mp_malloc_sync 00:03:53.131 EAL: No shared files mode enabled, IPC is disabled 00:03:53.131 EAL: Heap on socket 0 was shrunk by 34MB 00:03:53.131 EAL: Trying to obtain current memory policy. 00:03:53.131 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.131 EAL: Restoring previous memory policy: 4 00:03:53.131 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.131 EAL: request: mp_malloc_sync 00:03:53.131 EAL: No shared files mode enabled, IPC is disabled 00:03:53.131 EAL: Heap on socket 0 was expanded by 66MB 00:03:53.131 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.131 EAL: request: mp_malloc_sync 00:03:53.131 EAL: No shared files mode enabled, IPC is disabled 00:03:53.131 EAL: Heap on socket 0 was shrunk by 66MB 00:03:53.131 EAL: Trying to obtain current memory policy. 00:03:53.131 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.131 EAL: Restoring previous memory policy: 4 00:03:53.131 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.131 EAL: request: mp_malloc_sync 00:03:53.131 EAL: No shared files mode enabled, IPC is disabled 00:03:53.131 EAL: Heap on socket 0 was expanded by 130MB 00:03:53.131 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.131 EAL: request: mp_malloc_sync 00:03:53.131 EAL: No shared files mode enabled, IPC is disabled 00:03:53.131 EAL: Heap on socket 0 was shrunk by 130MB 00:03:53.131 EAL: Trying to obtain current memory policy. 
00:03:53.131 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.131 EAL: Restoring previous memory policy: 4 00:03:53.131 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.131 EAL: request: mp_malloc_sync 00:03:53.131 EAL: No shared files mode enabled, IPC is disabled 00:03:53.131 EAL: Heap on socket 0 was expanded by 258MB 00:03:53.131 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.391 EAL: request: mp_malloc_sync 00:03:53.391 EAL: No shared files mode enabled, IPC is disabled 00:03:53.391 EAL: Heap on socket 0 was shrunk by 258MB 00:03:53.391 EAL: Trying to obtain current memory policy. 00:03:53.391 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.391 EAL: Restoring previous memory policy: 4 00:03:53.391 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.391 EAL: request: mp_malloc_sync 00:03:53.391 EAL: No shared files mode enabled, IPC is disabled 00:03:53.391 EAL: Heap on socket 0 was expanded by 514MB 00:03:53.391 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.650 EAL: request: mp_malloc_sync 00:03:53.650 EAL: No shared files mode enabled, IPC is disabled 00:03:53.650 EAL: Heap on socket 0 was shrunk by 514MB 00:03:53.650 EAL: Trying to obtain current memory policy. 
00:03:53.650 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.650 EAL: Restoring previous memory policy: 4 00:03:53.650 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.650 EAL: request: mp_malloc_sync 00:03:53.650 EAL: No shared files mode enabled, IPC is disabled 00:03:53.650 EAL: Heap on socket 0 was expanded by 1026MB 00:03:53.909 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.170 EAL: request: mp_malloc_sync 00:03:54.170 EAL: No shared files mode enabled, IPC is disabled 00:03:54.170 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:54.170 passed 00:03:54.170 00:03:54.170 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.170 suites 1 1 n/a 0 0 00:03:54.170 tests 2 2 2 0 0 00:03:54.170 asserts 497 497 497 0 n/a 00:03:54.170 00:03:54.170 Elapsed time = 0.966 seconds 00:03:54.170 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.170 EAL: request: mp_malloc_sync 00:03:54.170 EAL: No shared files mode enabled, IPC is disabled 00:03:54.170 EAL: Heap on socket 0 was shrunk by 2MB 00:03:54.170 EAL: No shared files mode enabled, IPC is disabled 00:03:54.170 EAL: No shared files mode enabled, IPC is disabled 00:03:54.170 EAL: No shared files mode enabled, IPC is disabled 00:03:54.170 00:03:54.170 real 0m1.077s 00:03:54.170 user 0m0.643s 00:03:54.170 sys 0m0.412s 00:03:54.170 12:27:36 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.170 12:27:36 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:54.170 ************************************ 00:03:54.170 END TEST env_vtophys 00:03:54.170 ************************************ 00:03:54.170 12:27:36 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:54.170 12:27:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.170 12:27:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.170 12:27:36 env -- common/autotest_common.sh@10 -- # set +x 00:03:54.170 
************************************ 00:03:54.170 START TEST env_pci 00:03:54.170 ************************************ 00:03:54.170 12:27:36 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:54.170 00:03:54.170 00:03:54.170 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.170 http://cunit.sourceforge.net/ 00:03:54.170 00:03:54.170 00:03:54.170 Suite: pci 00:03:54.170 Test: pci_hook ...[2024-11-28 12:27:36.523038] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2328688 has claimed it 00:03:54.170 EAL: Cannot find device (10000:00:01.0) 00:03:54.170 EAL: Failed to attach device on primary process 00:03:54.170 passed 00:03:54.170 00:03:54.170 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.170 suites 1 1 n/a 0 0 00:03:54.170 tests 1 1 1 0 0 00:03:54.170 asserts 25 25 25 0 n/a 00:03:54.170 00:03:54.170 Elapsed time = 0.027 seconds 00:03:54.170 00:03:54.170 real 0m0.047s 00:03:54.170 user 0m0.017s 00:03:54.170 sys 0m0.030s 00:03:54.170 12:27:36 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.170 12:27:36 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:54.170 ************************************ 00:03:54.170 END TEST env_pci 00:03:54.170 ************************************ 00:03:54.170 12:27:36 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:54.170 12:27:36 env -- env/env.sh@15 -- # uname 00:03:54.170 12:27:36 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:54.170 12:27:36 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:54.170 12:27:36 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:54.170 12:27:36 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:54.170 12:27:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.170 12:27:36 env -- common/autotest_common.sh@10 -- # set +x 00:03:54.170 ************************************ 00:03:54.170 START TEST env_dpdk_post_init 00:03:54.170 ************************************ 00:03:54.170 12:27:36 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:54.170 EAL: Detected CPU lcores: 96 00:03:54.170 EAL: Detected NUMA nodes: 2 00:03:54.170 EAL: Detected shared linkage of DPDK 00:03:54.170 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:54.170 EAL: Selected IOVA mode 'VA' 00:03:54.170 EAL: VFIO support initialized 00:03:54.170 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:54.430 EAL: Using IOMMU type 1 (Type 1) 00:03:54.430 EAL: Ignore mapping IO port bar(1) 00:03:54.430 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:54.430 EAL: Ignore mapping IO port bar(1) 00:03:54.430 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:54.430 EAL: Ignore mapping IO port bar(1) 00:03:54.430 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:54.430 EAL: Ignore mapping IO port bar(1) 00:03:54.430 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:54.430 EAL: Ignore mapping IO port bar(1) 00:03:54.430 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:54.430 EAL: Ignore mapping IO port bar(1) 00:03:54.430 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:54.430 EAL: Ignore mapping IO port bar(1) 00:03:54.430 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:54.430 EAL: Ignore mapping IO port bar(1) 00:03:54.430 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:55.369 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:55.369 EAL: Ignore mapping IO port bar(1) 00:03:55.369 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:55.369 EAL: Ignore mapping IO port bar(1) 00:03:55.369 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:55.369 EAL: Ignore mapping IO port bar(1) 00:03:55.369 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:55.369 EAL: Ignore mapping IO port bar(1) 00:03:55.369 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:55.369 EAL: Ignore mapping IO port bar(1) 00:03:55.369 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:55.369 EAL: Ignore mapping IO port bar(1) 00:03:55.369 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:55.369 EAL: Ignore mapping IO port bar(1) 00:03:55.369 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:55.369 EAL: Ignore mapping IO port bar(1) 00:03:55.369 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:58.658 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:03:58.658 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:03:58.658 Starting DPDK initialization... 00:03:58.658 Starting SPDK post initialization... 00:03:58.658 SPDK NVMe probe 00:03:58.658 Attaching to 0000:5e:00.0 00:03:58.658 Attached to 0000:5e:00.0 00:03:58.658 Cleaning up... 
00:03:58.658 00:03:58.658 real 0m4.384s 00:03:58.658 user 0m2.986s 00:03:58.658 sys 0m0.469s 00:03:58.658 12:27:41 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.658 12:27:41 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:58.658 ************************************ 00:03:58.658 END TEST env_dpdk_post_init 00:03:58.658 ************************************ 00:03:58.658 12:27:41 env -- env/env.sh@26 -- # uname 00:03:58.658 12:27:41 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:58.658 12:27:41 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:58.658 12:27:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.658 12:27:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.658 12:27:41 env -- common/autotest_common.sh@10 -- # set +x 00:03:58.658 ************************************ 00:03:58.658 START TEST env_mem_callbacks 00:03:58.658 ************************************ 00:03:58.658 12:27:41 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:58.658 EAL: Detected CPU lcores: 96 00:03:58.658 EAL: Detected NUMA nodes: 2 00:03:58.658 EAL: Detected shared linkage of DPDK 00:03:58.658 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:58.658 EAL: Selected IOVA mode 'VA' 00:03:58.658 EAL: VFIO support initialized 00:03:58.658 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:58.658 00:03:58.658 00:03:58.658 CUnit - A unit testing framework for C - Version 2.1-3 00:03:58.658 http://cunit.sourceforge.net/ 00:03:58.658 00:03:58.658 00:03:58.658 Suite: memory 00:03:58.658 Test: test ... 
00:03:58.658 register 0x200000200000 2097152 00:03:58.658 malloc 3145728 00:03:58.658 register 0x200000400000 4194304 00:03:58.658 buf 0x200000500000 len 3145728 PASSED 00:03:58.658 malloc 64 00:03:58.658 buf 0x2000004fff40 len 64 PASSED 00:03:58.658 malloc 4194304 00:03:58.658 register 0x200000800000 6291456 00:03:58.658 buf 0x200000a00000 len 4194304 PASSED 00:03:58.658 free 0x200000500000 3145728 00:03:58.658 free 0x2000004fff40 64 00:03:58.658 unregister 0x200000400000 4194304 PASSED 00:03:58.658 free 0x200000a00000 4194304 00:03:58.658 unregister 0x200000800000 6291456 PASSED 00:03:58.658 malloc 8388608 00:03:58.658 register 0x200000400000 10485760 00:03:58.658 buf 0x200000600000 len 8388608 PASSED 00:03:58.658 free 0x200000600000 8388608 00:03:58.658 unregister 0x200000400000 10485760 PASSED 00:03:58.658 passed 00:03:58.658 00:03:58.658 Run Summary: Type Total Ran Passed Failed Inactive 00:03:58.658 suites 1 1 n/a 0 0 00:03:58.658 tests 1 1 1 0 0 00:03:58.658 asserts 15 15 15 0 n/a 00:03:58.658 00:03:58.658 Elapsed time = 0.006 seconds 00:03:58.658 00:03:58.658 real 0m0.052s 00:03:58.658 user 0m0.018s 00:03:58.658 sys 0m0.034s 00:03:58.658 12:27:41 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.658 12:27:41 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:58.658 ************************************ 00:03:58.658 END TEST env_mem_callbacks 00:03:58.658 ************************************ 00:03:58.658 00:03:58.658 real 0m6.233s 00:03:58.658 user 0m4.047s 00:03:58.658 sys 0m1.270s 00:03:58.658 12:27:41 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.658 12:27:41 env -- common/autotest_common.sh@10 -- # set +x 00:03:58.658 ************************************ 00:03:58.658 END TEST env 00:03:58.658 ************************************ 00:03:58.919 12:27:41 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:58.919 12:27:41 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.919 12:27:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.919 12:27:41 -- common/autotest_common.sh@10 -- # set +x 00:03:58.919 ************************************ 00:03:58.919 START TEST rpc 00:03:58.919 ************************************ 00:03:58.919 12:27:41 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:58.919 * Looking for test storage... 00:03:58.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:58.919 12:27:41 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:58.919 12:27:41 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:58.919 12:27:41 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:58.919 12:27:41 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:58.919 12:27:41 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:58.919 12:27:41 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:58.919 12:27:41 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:58.919 12:27:41 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:58.919 12:27:41 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:58.919 12:27:41 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:58.919 12:27:41 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:58.919 12:27:41 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:58.919 12:27:41 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:58.919 12:27:41 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:58.919 12:27:41 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:58.919 12:27:41 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:58.919 12:27:41 rpc -- scripts/common.sh@345 -- # : 1 00:03:58.919 12:27:41 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:58.919 12:27:41 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:58.919 12:27:41 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:58.919 12:27:41 rpc -- scripts/common.sh@353 -- # local d=1 00:03:58.919 12:27:41 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:58.919 12:27:41 rpc -- scripts/common.sh@355 -- # echo 1 00:03:58.919 12:27:41 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:58.919 12:27:41 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:58.919 12:27:41 rpc -- scripts/common.sh@353 -- # local d=2 00:03:58.919 12:27:41 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:58.919 12:27:41 rpc -- scripts/common.sh@355 -- # echo 2 00:03:58.919 12:27:41 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:58.919 12:27:41 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:58.919 12:27:41 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:58.919 12:27:41 rpc -- scripts/common.sh@368 -- # return 0 00:03:58.919 12:27:41 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:58.919 12:27:41 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:58.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.919 --rc genhtml_branch_coverage=1 00:03:58.919 --rc genhtml_function_coverage=1 00:03:58.919 --rc genhtml_legend=1 00:03:58.919 --rc geninfo_all_blocks=1 00:03:58.919 --rc geninfo_unexecuted_blocks=1 00:03:58.919 00:03:58.919 ' 00:03:58.919 12:27:41 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:58.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.919 --rc genhtml_branch_coverage=1 00:03:58.919 --rc genhtml_function_coverage=1 00:03:58.919 --rc genhtml_legend=1 00:03:58.919 --rc geninfo_all_blocks=1 00:03:58.919 --rc geninfo_unexecuted_blocks=1 00:03:58.919 00:03:58.919 ' 00:03:58.919 12:27:41 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:58.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:58.919 --rc genhtml_branch_coverage=1 00:03:58.919 --rc genhtml_function_coverage=1 00:03:58.919 --rc genhtml_legend=1 00:03:58.919 --rc geninfo_all_blocks=1 00:03:58.919 --rc geninfo_unexecuted_blocks=1 00:03:58.919 00:03:58.919 ' 00:03:58.919 12:27:41 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:58.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.919 --rc genhtml_branch_coverage=1 00:03:58.919 --rc genhtml_function_coverage=1 00:03:58.919 --rc genhtml_legend=1 00:03:58.919 --rc geninfo_all_blocks=1 00:03:58.919 --rc geninfo_unexecuted_blocks=1 00:03:58.919 00:03:58.919 ' 00:03:58.919 12:27:41 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2329574 00:03:58.919 12:27:41 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:58.919 12:27:41 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2329574 00:03:58.919 12:27:41 rpc -- common/autotest_common.sh@835 -- # '[' -z 2329574 ']' 00:03:58.919 12:27:41 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:58.919 12:27:41 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:58.919 12:27:41 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:58.919 12:27:41 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:58.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:58.919 12:27:41 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:58.919 12:27:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.178 [2024-11-28 12:27:41.459150] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:03:59.178 [2024-11-28 12:27:41.459201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2329574 ] 00:03:59.178 [2024-11-28 12:27:41.521889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.178 [2024-11-28 12:27:41.564046] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:59.178 [2024-11-28 12:27:41.564083] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2329574' to capture a snapshot of events at runtime. 00:03:59.178 [2024-11-28 12:27:41.564092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:59.178 [2024-11-28 12:27:41.564097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:59.178 [2024-11-28 12:27:41.564102] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2329574 for offline analysis/debug. 
00:03:59.178 [2024-11-28 12:27:41.564640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.438 12:27:41 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:59.438 12:27:41 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:59.438 12:27:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:59.438 12:27:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:59.438 12:27:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:59.438 12:27:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:59.438 12:27:41 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.438 12:27:41 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.438 12:27:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.438 ************************************ 00:03:59.438 START TEST rpc_integrity 00:03:59.438 ************************************ 00:03:59.438 12:27:41 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:59.438 12:27:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:59.438 12:27:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.438 12:27:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.438 12:27:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.438 12:27:41 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:59.438 12:27:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:59.438 12:27:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:59.438 12:27:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:59.438 12:27:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.438 12:27:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.438 12:27:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.438 12:27:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:59.438 12:27:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:59.438 12:27:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.438 12:27:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.438 12:27:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.438 12:27:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:59.438 { 00:03:59.438 "name": "Malloc0", 00:03:59.438 "aliases": [ 00:03:59.438 "626698d8-5a55-4d0b-b975-b43a3655ef5f" 00:03:59.438 ], 00:03:59.438 "product_name": "Malloc disk", 00:03:59.438 "block_size": 512, 00:03:59.438 "num_blocks": 16384, 00:03:59.438 "uuid": "626698d8-5a55-4d0b-b975-b43a3655ef5f", 00:03:59.438 "assigned_rate_limits": { 00:03:59.438 "rw_ios_per_sec": 0, 00:03:59.438 "rw_mbytes_per_sec": 0, 00:03:59.438 "r_mbytes_per_sec": 0, 00:03:59.438 "w_mbytes_per_sec": 0 00:03:59.438 }, 00:03:59.438 "claimed": false, 00:03:59.438 "zoned": false, 00:03:59.438 "supported_io_types": { 00:03:59.438 "read": true, 00:03:59.438 "write": true, 00:03:59.438 "unmap": true, 00:03:59.438 "flush": true, 00:03:59.438 "reset": true, 00:03:59.438 "nvme_admin": false, 00:03:59.438 "nvme_io": false, 00:03:59.438 "nvme_io_md": false, 00:03:59.438 "write_zeroes": true, 00:03:59.438 "zcopy": true, 00:03:59.438 "get_zone_info": false, 00:03:59.438 
"zone_management": false, 00:03:59.438 "zone_append": false, 00:03:59.438 "compare": false, 00:03:59.438 "compare_and_write": false, 00:03:59.438 "abort": true, 00:03:59.438 "seek_hole": false, 00:03:59.438 "seek_data": false, 00:03:59.438 "copy": true, 00:03:59.438 "nvme_iov_md": false 00:03:59.438 }, 00:03:59.438 "memory_domains": [ 00:03:59.438 { 00:03:59.438 "dma_device_id": "system", 00:03:59.438 "dma_device_type": 1 00:03:59.438 }, 00:03:59.438 { 00:03:59.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.438 "dma_device_type": 2 00:03:59.438 } 00:03:59.438 ], 00:03:59.438 "driver_specific": {} 00:03:59.438 } 00:03:59.438 ]' 00:03:59.438 12:27:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:59.438 12:27:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:59.438 12:27:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:59.438 12:27:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.438 12:27:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.438 [2024-11-28 12:27:41.928541] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:59.438 [2024-11-28 12:27:41.928570] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:59.438 [2024-11-28 12:27:41.928582] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1414280 00:03:59.438 [2024-11-28 12:27:41.928589] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:59.438 [2024-11-28 12:27:41.929691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:59.438 [2024-11-28 12:27:41.929711] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:59.438 Passthru0 00:03:59.438 12:27:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.438 12:27:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:59.438 12:27:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.438 12:27:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.438 12:27:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.438 12:27:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:59.438 { 00:03:59.438 "name": "Malloc0", 00:03:59.438 "aliases": [ 00:03:59.438 "626698d8-5a55-4d0b-b975-b43a3655ef5f" 00:03:59.438 ], 00:03:59.438 "product_name": "Malloc disk", 00:03:59.438 "block_size": 512, 00:03:59.438 "num_blocks": 16384, 00:03:59.438 "uuid": "626698d8-5a55-4d0b-b975-b43a3655ef5f", 00:03:59.438 "assigned_rate_limits": { 00:03:59.438 "rw_ios_per_sec": 0, 00:03:59.438 "rw_mbytes_per_sec": 0, 00:03:59.438 "r_mbytes_per_sec": 0, 00:03:59.438 "w_mbytes_per_sec": 0 00:03:59.438 }, 00:03:59.438 "claimed": true, 00:03:59.438 "claim_type": "exclusive_write", 00:03:59.438 "zoned": false, 00:03:59.438 "supported_io_types": { 00:03:59.438 "read": true, 00:03:59.438 "write": true, 00:03:59.438 "unmap": true, 00:03:59.438 "flush": true, 00:03:59.438 "reset": true, 00:03:59.438 "nvme_admin": false, 00:03:59.438 "nvme_io": false, 00:03:59.438 "nvme_io_md": false, 00:03:59.438 "write_zeroes": true, 00:03:59.438 "zcopy": true, 00:03:59.438 "get_zone_info": false, 00:03:59.438 "zone_management": false, 00:03:59.438 "zone_append": false, 00:03:59.438 "compare": false, 00:03:59.438 "compare_and_write": false, 00:03:59.438 "abort": true, 00:03:59.438 "seek_hole": false, 00:03:59.438 "seek_data": false, 00:03:59.438 "copy": true, 00:03:59.438 "nvme_iov_md": false 00:03:59.438 }, 00:03:59.438 "memory_domains": [ 00:03:59.438 { 00:03:59.438 "dma_device_id": "system", 00:03:59.438 "dma_device_type": 1 00:03:59.438 }, 00:03:59.438 { 00:03:59.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.438 "dma_device_type": 2 00:03:59.438 } 00:03:59.438 ], 00:03:59.438 "driver_specific": {} 00:03:59.438 }, 00:03:59.438 { 
00:03:59.438 "name": "Passthru0", 00:03:59.438 "aliases": [ 00:03:59.438 "cd0bfbea-2b25-5b0d-a55a-333295b7b63c" 00:03:59.438 ], 00:03:59.438 "product_name": "passthru", 00:03:59.438 "block_size": 512, 00:03:59.438 "num_blocks": 16384, 00:03:59.438 "uuid": "cd0bfbea-2b25-5b0d-a55a-333295b7b63c", 00:03:59.438 "assigned_rate_limits": { 00:03:59.438 "rw_ios_per_sec": 0, 00:03:59.438 "rw_mbytes_per_sec": 0, 00:03:59.438 "r_mbytes_per_sec": 0, 00:03:59.438 "w_mbytes_per_sec": 0 00:03:59.438 }, 00:03:59.438 "claimed": false, 00:03:59.438 "zoned": false, 00:03:59.438 "supported_io_types": { 00:03:59.438 "read": true, 00:03:59.438 "write": true, 00:03:59.438 "unmap": true, 00:03:59.438 "flush": true, 00:03:59.438 "reset": true, 00:03:59.438 "nvme_admin": false, 00:03:59.438 "nvme_io": false, 00:03:59.438 "nvme_io_md": false, 00:03:59.438 "write_zeroes": true, 00:03:59.438 "zcopy": true, 00:03:59.438 "get_zone_info": false, 00:03:59.438 "zone_management": false, 00:03:59.438 "zone_append": false, 00:03:59.438 "compare": false, 00:03:59.438 "compare_and_write": false, 00:03:59.438 "abort": true, 00:03:59.438 "seek_hole": false, 00:03:59.438 "seek_data": false, 00:03:59.438 "copy": true, 00:03:59.438 "nvme_iov_md": false 00:03:59.438 }, 00:03:59.438 "memory_domains": [ 00:03:59.438 { 00:03:59.439 "dma_device_id": "system", 00:03:59.439 "dma_device_type": 1 00:03:59.439 }, 00:03:59.439 { 00:03:59.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.439 "dma_device_type": 2 00:03:59.439 } 00:03:59.439 ], 00:03:59.439 "driver_specific": { 00:03:59.439 "passthru": { 00:03:59.439 "name": "Passthru0", 00:03:59.439 "base_bdev_name": "Malloc0" 00:03:59.439 } 00:03:59.439 } 00:03:59.439 } 00:03:59.439 ]' 00:03:59.439 12:27:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:59.697 12:27:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:59.698 12:27:41 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:59.698 12:27:41 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.698 12:27:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.698 12:27:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.698 12:27:41 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:59.698 12:27:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.698 12:27:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.698 12:27:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.698 12:27:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:59.698 12:27:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.698 12:27:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.698 12:27:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.698 12:27:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:59.698 12:27:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:59.698 12:27:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:59.698 00:03:59.698 real 0m0.247s 00:03:59.698 user 0m0.167s 00:03:59.698 sys 0m0.024s 00:03:59.698 12:27:42 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.698 12:27:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.698 ************************************ 00:03:59.698 END TEST rpc_integrity 00:03:59.698 ************************************ 00:03:59.698 12:27:42 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:59.698 12:27:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.698 12:27:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.698 12:27:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.698 ************************************ 00:03:59.698 START TEST rpc_plugins 
00:03:59.698 ************************************ 00:03:59.698 12:27:42 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:59.698 12:27:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:59.698 12:27:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.698 12:27:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:59.698 12:27:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.698 12:27:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:59.698 12:27:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:59.698 12:27:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.698 12:27:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:59.698 12:27:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.698 12:27:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:59.698 { 00:03:59.698 "name": "Malloc1", 00:03:59.698 "aliases": [ 00:03:59.698 "22435fd6-edcc-4c84-bc22-2d42f4a1eb64" 00:03:59.698 ], 00:03:59.698 "product_name": "Malloc disk", 00:03:59.698 "block_size": 4096, 00:03:59.698 "num_blocks": 256, 00:03:59.698 "uuid": "22435fd6-edcc-4c84-bc22-2d42f4a1eb64", 00:03:59.698 "assigned_rate_limits": { 00:03:59.698 "rw_ios_per_sec": 0, 00:03:59.698 "rw_mbytes_per_sec": 0, 00:03:59.698 "r_mbytes_per_sec": 0, 00:03:59.698 "w_mbytes_per_sec": 0 00:03:59.698 }, 00:03:59.698 "claimed": false, 00:03:59.698 "zoned": false, 00:03:59.698 "supported_io_types": { 00:03:59.698 "read": true, 00:03:59.698 "write": true, 00:03:59.698 "unmap": true, 00:03:59.698 "flush": true, 00:03:59.698 "reset": true, 00:03:59.698 "nvme_admin": false, 00:03:59.698 "nvme_io": false, 00:03:59.698 "nvme_io_md": false, 00:03:59.698 "write_zeroes": true, 00:03:59.698 "zcopy": true, 00:03:59.698 "get_zone_info": false, 00:03:59.698 "zone_management": false, 00:03:59.698 
"zone_append": false, 00:03:59.698 "compare": false, 00:03:59.698 "compare_and_write": false, 00:03:59.698 "abort": true, 00:03:59.698 "seek_hole": false, 00:03:59.698 "seek_data": false, 00:03:59.698 "copy": true, 00:03:59.698 "nvme_iov_md": false 00:03:59.698 }, 00:03:59.698 "memory_domains": [ 00:03:59.698 { 00:03:59.698 "dma_device_id": "system", 00:03:59.698 "dma_device_type": 1 00:03:59.698 }, 00:03:59.698 { 00:03:59.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.698 "dma_device_type": 2 00:03:59.698 } 00:03:59.698 ], 00:03:59.698 "driver_specific": {} 00:03:59.698 } 00:03:59.698 ]' 00:03:59.698 12:27:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:59.698 12:27:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:59.698 12:27:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:59.698 12:27:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.698 12:27:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:59.698 12:27:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.698 12:27:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:59.698 12:27:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.698 12:27:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:59.956 12:27:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.956 12:27:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:59.956 12:27:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:59.956 12:27:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:59.956 00:03:59.956 real 0m0.140s 00:03:59.956 user 0m0.079s 00:03:59.956 sys 0m0.022s 00:03:59.956 12:27:42 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.956 12:27:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:59.956 ************************************ 
00:03:59.956 END TEST rpc_plugins 00:03:59.956 ************************************ 00:03:59.956 12:27:42 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:59.957 12:27:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.957 12:27:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.957 12:27:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.957 ************************************ 00:03:59.957 START TEST rpc_trace_cmd_test 00:03:59.957 ************************************ 00:03:59.957 12:27:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:59.957 12:27:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:59.957 12:27:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:59.957 12:27:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.957 12:27:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:59.957 12:27:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.957 12:27:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:59.957 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2329574", 00:03:59.957 "tpoint_group_mask": "0x8", 00:03:59.957 "iscsi_conn": { 00:03:59.957 "mask": "0x2", 00:03:59.957 "tpoint_mask": "0x0" 00:03:59.957 }, 00:03:59.957 "scsi": { 00:03:59.957 "mask": "0x4", 00:03:59.957 "tpoint_mask": "0x0" 00:03:59.957 }, 00:03:59.957 "bdev": { 00:03:59.957 "mask": "0x8", 00:03:59.957 "tpoint_mask": "0xffffffffffffffff" 00:03:59.957 }, 00:03:59.957 "nvmf_rdma": { 00:03:59.957 "mask": "0x10", 00:03:59.957 "tpoint_mask": "0x0" 00:03:59.957 }, 00:03:59.957 "nvmf_tcp": { 00:03:59.957 "mask": "0x20", 00:03:59.957 "tpoint_mask": "0x0" 00:03:59.957 }, 00:03:59.957 "ftl": { 00:03:59.957 "mask": "0x40", 00:03:59.957 "tpoint_mask": "0x0" 00:03:59.957 }, 00:03:59.957 "blobfs": { 00:03:59.957 "mask": "0x80", 00:03:59.957 
"tpoint_mask": "0x0" 00:03:59.957 }, 00:03:59.957 "dsa": { 00:03:59.957 "mask": "0x200", 00:03:59.957 "tpoint_mask": "0x0" 00:03:59.957 }, 00:03:59.957 "thread": { 00:03:59.957 "mask": "0x400", 00:03:59.957 "tpoint_mask": "0x0" 00:03:59.957 }, 00:03:59.957 "nvme_pcie": { 00:03:59.957 "mask": "0x800", 00:03:59.957 "tpoint_mask": "0x0" 00:03:59.957 }, 00:03:59.957 "iaa": { 00:03:59.957 "mask": "0x1000", 00:03:59.957 "tpoint_mask": "0x0" 00:03:59.957 }, 00:03:59.957 "nvme_tcp": { 00:03:59.957 "mask": "0x2000", 00:03:59.957 "tpoint_mask": "0x0" 00:03:59.957 }, 00:03:59.957 "bdev_nvme": { 00:03:59.957 "mask": "0x4000", 00:03:59.957 "tpoint_mask": "0x0" 00:03:59.957 }, 00:03:59.957 "sock": { 00:03:59.957 "mask": "0x8000", 00:03:59.957 "tpoint_mask": "0x0" 00:03:59.957 }, 00:03:59.957 "blob": { 00:03:59.957 "mask": "0x10000", 00:03:59.957 "tpoint_mask": "0x0" 00:03:59.957 }, 00:03:59.957 "bdev_raid": { 00:03:59.957 "mask": "0x20000", 00:03:59.957 "tpoint_mask": "0x0" 00:03:59.957 }, 00:03:59.957 "scheduler": { 00:03:59.957 "mask": "0x40000", 00:03:59.957 "tpoint_mask": "0x0" 00:03:59.957 } 00:03:59.957 }' 00:03:59.957 12:27:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:59.957 12:27:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:59.957 12:27:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:59.957 12:27:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:59.957 12:27:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:00.215 12:27:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:00.215 12:27:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:00.215 12:27:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:00.215 12:27:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:00.215 12:27:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:00.215 00:04:00.215 real 0m0.206s 00:04:00.215 user 0m0.176s 00:04:00.215 sys 0m0.021s 00:04:00.215 12:27:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.215 12:27:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:00.215 ************************************ 00:04:00.215 END TEST rpc_trace_cmd_test 00:04:00.215 ************************************ 00:04:00.215 12:27:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:00.215 12:27:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:00.215 12:27:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:00.215 12:27:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.215 12:27:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.215 12:27:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.215 ************************************ 00:04:00.215 START TEST rpc_daemon_integrity 00:04:00.215 ************************************ 00:04:00.215 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:00.215 12:27:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:00.215 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.215 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.215 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.215 12:27:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:00.215 12:27:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:00.215 12:27:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:00.215 12:27:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:00.215 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.215 12:27:42 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:00.215 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.215 12:27:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:00.215 12:27:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:00.215 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.215 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.215 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.215 12:27:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:00.215 { 00:04:00.215 "name": "Malloc2", 00:04:00.215 "aliases": [ 00:04:00.215 "c321817d-0a26-449c-9ec6-26c5ee531d0a" 00:04:00.215 ], 00:04:00.215 "product_name": "Malloc disk", 00:04:00.215 "block_size": 512, 00:04:00.215 "num_blocks": 16384, 00:04:00.215 "uuid": "c321817d-0a26-449c-9ec6-26c5ee531d0a", 00:04:00.215 "assigned_rate_limits": { 00:04:00.215 "rw_ios_per_sec": 0, 00:04:00.215 "rw_mbytes_per_sec": 0, 00:04:00.215 "r_mbytes_per_sec": 0, 00:04:00.215 "w_mbytes_per_sec": 0 00:04:00.215 }, 00:04:00.215 "claimed": false, 00:04:00.215 "zoned": false, 00:04:00.215 "supported_io_types": { 00:04:00.215 "read": true, 00:04:00.215 "write": true, 00:04:00.215 "unmap": true, 00:04:00.215 "flush": true, 00:04:00.215 "reset": true, 00:04:00.215 "nvme_admin": false, 00:04:00.215 "nvme_io": false, 00:04:00.215 "nvme_io_md": false, 00:04:00.215 "write_zeroes": true, 00:04:00.215 "zcopy": true, 00:04:00.215 "get_zone_info": false, 00:04:00.215 "zone_management": false, 00:04:00.215 "zone_append": false, 00:04:00.215 "compare": false, 00:04:00.215 "compare_and_write": false, 00:04:00.215 "abort": true, 00:04:00.215 "seek_hole": false, 00:04:00.215 "seek_data": false, 00:04:00.215 "copy": true, 00:04:00.215 "nvme_iov_md": false 00:04:00.215 }, 00:04:00.215 "memory_domains": [ 00:04:00.215 { 
00:04:00.215 "dma_device_id": "system", 00:04:00.215 "dma_device_type": 1 00:04:00.215 }, 00:04:00.215 { 00:04:00.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.215 "dma_device_type": 2 00:04:00.215 } 00:04:00.215 ], 00:04:00.215 "driver_specific": {} 00:04:00.215 } 00:04:00.215 ]' 00:04:00.215 12:27:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.474 [2024-11-28 12:27:42.738746] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:00.474 [2024-11-28 12:27:42.738773] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:00.474 [2024-11-28 12:27:42.738785] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1416150 00:04:00.474 [2024-11-28 12:27:42.738791] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:00.474 [2024-11-28 12:27:42.739791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:00.474 [2024-11-28 12:27:42.739812] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:00.474 Passthru0 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:00.474 { 00:04:00.474 "name": "Malloc2", 00:04:00.474 "aliases": [ 00:04:00.474 "c321817d-0a26-449c-9ec6-26c5ee531d0a" 00:04:00.474 ], 00:04:00.474 "product_name": "Malloc disk", 00:04:00.474 "block_size": 512, 00:04:00.474 "num_blocks": 16384, 00:04:00.474 "uuid": "c321817d-0a26-449c-9ec6-26c5ee531d0a", 00:04:00.474 "assigned_rate_limits": { 00:04:00.474 "rw_ios_per_sec": 0, 00:04:00.474 "rw_mbytes_per_sec": 0, 00:04:00.474 "r_mbytes_per_sec": 0, 00:04:00.474 "w_mbytes_per_sec": 0 00:04:00.474 }, 00:04:00.474 "claimed": true, 00:04:00.474 "claim_type": "exclusive_write", 00:04:00.474 "zoned": false, 00:04:00.474 "supported_io_types": { 00:04:00.474 "read": true, 00:04:00.474 "write": true, 00:04:00.474 "unmap": true, 00:04:00.474 "flush": true, 00:04:00.474 "reset": true, 00:04:00.474 "nvme_admin": false, 00:04:00.474 "nvme_io": false, 00:04:00.474 "nvme_io_md": false, 00:04:00.474 "write_zeroes": true, 00:04:00.474 "zcopy": true, 00:04:00.474 "get_zone_info": false, 00:04:00.474 "zone_management": false, 00:04:00.474 "zone_append": false, 00:04:00.474 "compare": false, 00:04:00.474 "compare_and_write": false, 00:04:00.474 "abort": true, 00:04:00.474 "seek_hole": false, 00:04:00.474 "seek_data": false, 00:04:00.474 "copy": true, 00:04:00.474 "nvme_iov_md": false 00:04:00.474 }, 00:04:00.474 "memory_domains": [ 00:04:00.474 { 00:04:00.474 "dma_device_id": "system", 00:04:00.474 "dma_device_type": 1 00:04:00.474 }, 00:04:00.474 { 00:04:00.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.474 "dma_device_type": 2 00:04:00.474 } 00:04:00.474 ], 00:04:00.474 "driver_specific": {} 00:04:00.474 }, 00:04:00.474 { 00:04:00.474 "name": "Passthru0", 00:04:00.474 "aliases": [ 00:04:00.474 "b15bb7ee-3255-5595-9bf3-75c0637e37a6" 00:04:00.474 ], 00:04:00.474 "product_name": "passthru", 00:04:00.474 "block_size": 512, 00:04:00.474 "num_blocks": 16384, 00:04:00.474 "uuid": 
"b15bb7ee-3255-5595-9bf3-75c0637e37a6", 00:04:00.474 "assigned_rate_limits": { 00:04:00.474 "rw_ios_per_sec": 0, 00:04:00.474 "rw_mbytes_per_sec": 0, 00:04:00.474 "r_mbytes_per_sec": 0, 00:04:00.474 "w_mbytes_per_sec": 0 00:04:00.474 }, 00:04:00.474 "claimed": false, 00:04:00.474 "zoned": false, 00:04:00.474 "supported_io_types": { 00:04:00.474 "read": true, 00:04:00.474 "write": true, 00:04:00.474 "unmap": true, 00:04:00.474 "flush": true, 00:04:00.474 "reset": true, 00:04:00.474 "nvme_admin": false, 00:04:00.474 "nvme_io": false, 00:04:00.474 "nvme_io_md": false, 00:04:00.474 "write_zeroes": true, 00:04:00.474 "zcopy": true, 00:04:00.474 "get_zone_info": false, 00:04:00.474 "zone_management": false, 00:04:00.474 "zone_append": false, 00:04:00.474 "compare": false, 00:04:00.474 "compare_and_write": false, 00:04:00.474 "abort": true, 00:04:00.474 "seek_hole": false, 00:04:00.474 "seek_data": false, 00:04:00.474 "copy": true, 00:04:00.474 "nvme_iov_md": false 00:04:00.474 }, 00:04:00.474 "memory_domains": [ 00:04:00.474 { 00:04:00.474 "dma_device_id": "system", 00:04:00.474 "dma_device_type": 1 00:04:00.474 }, 00:04:00.474 { 00:04:00.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.474 "dma_device_type": 2 00:04:00.474 } 00:04:00.474 ], 00:04:00.474 "driver_specific": { 00:04:00.474 "passthru": { 00:04:00.474 "name": "Passthru0", 00:04:00.474 "base_bdev_name": "Malloc2" 00:04:00.474 } 00:04:00.474 } 00:04:00.474 } 00:04:00.474 ]' 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:00.474 00:04:00.474 real 0m0.262s 00:04:00.474 user 0m0.170s 00:04:00.474 sys 0m0.035s 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.474 12:27:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.474 ************************************ 00:04:00.474 END TEST rpc_daemon_integrity 00:04:00.474 ************************************ 00:04:00.474 12:27:42 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:00.474 12:27:42 rpc -- rpc/rpc.sh@84 -- # killprocess 2329574 00:04:00.474 12:27:42 rpc -- common/autotest_common.sh@954 -- # '[' -z 2329574 ']' 00:04:00.474 12:27:42 rpc -- common/autotest_common.sh@958 -- # kill -0 2329574 00:04:00.474 12:27:42 rpc -- common/autotest_common.sh@959 -- # uname 00:04:00.474 12:27:42 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:00.474 12:27:42 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2329574 00:04:00.474 12:27:42 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:00.474 12:27:42 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:00.474 12:27:42 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2329574' 00:04:00.474 killing process with pid 2329574 00:04:00.474 12:27:42 rpc -- common/autotest_common.sh@973 -- # kill 2329574 00:04:00.474 12:27:42 rpc -- common/autotest_common.sh@978 -- # wait 2329574 00:04:01.041 00:04:01.041 real 0m2.019s 00:04:01.041 user 0m2.593s 00:04:01.041 sys 0m0.648s 00:04:01.041 12:27:43 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.041 12:27:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.041 ************************************ 00:04:01.041 END TEST rpc 00:04:01.041 ************************************ 00:04:01.041 12:27:43 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:01.041 12:27:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.041 12:27:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.041 12:27:43 -- common/autotest_common.sh@10 -- # set +x 00:04:01.041 ************************************ 00:04:01.041 START TEST skip_rpc 00:04:01.041 ************************************ 00:04:01.041 12:27:43 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:01.041 * Looking for test storage... 
00:04:01.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:01.041 12:27:43 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:01.041 12:27:43 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:01.041 12:27:43 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:01.041 12:27:43 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:01.041 12:27:43 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:01.041 12:27:43 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.041 12:27:43 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:01.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.041 --rc genhtml_branch_coverage=1 00:04:01.041 --rc genhtml_function_coverage=1 00:04:01.041 --rc genhtml_legend=1 00:04:01.041 --rc geninfo_all_blocks=1 00:04:01.041 --rc geninfo_unexecuted_blocks=1 00:04:01.041 00:04:01.041 ' 00:04:01.041 12:27:43 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:01.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.041 --rc genhtml_branch_coverage=1 00:04:01.041 --rc genhtml_function_coverage=1 00:04:01.041 --rc genhtml_legend=1 00:04:01.041 --rc geninfo_all_blocks=1 00:04:01.041 --rc geninfo_unexecuted_blocks=1 00:04:01.041 00:04:01.041 ' 00:04:01.041 12:27:43 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:01.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.041 --rc genhtml_branch_coverage=1 00:04:01.041 --rc genhtml_function_coverage=1 00:04:01.041 --rc genhtml_legend=1 00:04:01.041 --rc geninfo_all_blocks=1 00:04:01.041 --rc geninfo_unexecuted_blocks=1 00:04:01.041 00:04:01.041 ' 00:04:01.041 12:27:43 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:01.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.041 --rc genhtml_branch_coverage=1 00:04:01.041 --rc genhtml_function_coverage=1 00:04:01.041 --rc genhtml_legend=1 00:04:01.041 --rc geninfo_all_blocks=1 00:04:01.041 --rc geninfo_unexecuted_blocks=1 00:04:01.041 00:04:01.041 ' 00:04:01.041 12:27:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:01.041 12:27:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:01.041 12:27:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:01.041 12:27:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.041 12:27:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.041 12:27:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.041 ************************************ 00:04:01.041 START TEST skip_rpc 00:04:01.041 ************************************ 00:04:01.041 12:27:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:01.041 12:27:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2330209 00:04:01.041 12:27:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:01.041 12:27:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:01.041 12:27:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:04:01.308 [2024-11-28 12:27:43.593487] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:04:01.308 [2024-11-28 12:27:43.593529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2330209 ] 00:04:01.308 [2024-11-28 12:27:43.654696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.308 [2024-11-28 12:27:43.694823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:06.573 12:27:48 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2330209 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2330209 ']' 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2330209 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2330209 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2330209' 00:04:06.573 killing process with pid 2330209 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2330209 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2330209 00:04:06.573 00:04:06.573 real 0m5.371s 00:04:06.573 user 0m5.143s 00:04:06.573 sys 0m0.270s 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.573 12:27:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.573 ************************************ 00:04:06.573 END TEST skip_rpc 00:04:06.573 ************************************ 00:04:06.573 12:27:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:06.573 12:27:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.573 12:27:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.573 12:27:48 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.573 ************************************ 00:04:06.573 START TEST skip_rpc_with_json 00:04:06.573 ************************************ 00:04:06.573 12:27:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:06.573 12:27:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:06.573 12:27:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2331157 00:04:06.573 12:27:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.573 12:27:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:06.573 12:27:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2331157 00:04:06.573 12:27:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2331157 ']' 00:04:06.573 12:27:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.573 12:27:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:06.573 12:27:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.573 12:27:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:06.573 12:27:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:06.573 [2024-11-28 12:27:49.038185] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:04:06.573 [2024-11-28 12:27:49.038228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2331157 ] 00:04:06.833 [2024-11-28 12:27:49.101505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.833 [2024-11-28 12:27:49.143990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.833 12:27:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:06.833 12:27:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:06.833 12:27:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:06.833 12:27:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.833 12:27:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:07.092 [2024-11-28 12:27:49.352893] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:07.092 request: 00:04:07.092 { 00:04:07.092 "trtype": "tcp", 00:04:07.092 "method": "nvmf_get_transports", 00:04:07.092 "req_id": 1 00:04:07.092 } 00:04:07.092 Got JSON-RPC error response 00:04:07.092 response: 00:04:07.092 { 00:04:07.092 "code": -19, 00:04:07.092 "message": "No such device" 00:04:07.092 } 00:04:07.092 12:27:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:07.092 12:27:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:07.092 12:27:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.092 12:27:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:07.092 [2024-11-28 12:27:49.365006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:07.092 12:27:49 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.092 12:27:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:07.092 12:27:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.092 12:27:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:07.092 12:27:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.092 12:27:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:07.092 { 00:04:07.092 "subsystems": [ 00:04:07.092 { 00:04:07.092 "subsystem": "fsdev", 00:04:07.092 "config": [ 00:04:07.092 { 00:04:07.092 "method": "fsdev_set_opts", 00:04:07.092 "params": { 00:04:07.092 "fsdev_io_pool_size": 65535, 00:04:07.092 "fsdev_io_cache_size": 256 00:04:07.092 } 00:04:07.092 } 00:04:07.092 ] 00:04:07.092 }, 00:04:07.092 { 00:04:07.092 "subsystem": "vfio_user_target", 00:04:07.092 "config": null 00:04:07.092 }, 00:04:07.092 { 00:04:07.092 "subsystem": "keyring", 00:04:07.092 "config": [] 00:04:07.092 }, 00:04:07.092 { 00:04:07.092 "subsystem": "iobuf", 00:04:07.092 "config": [ 00:04:07.092 { 00:04:07.092 "method": "iobuf_set_options", 00:04:07.092 "params": { 00:04:07.092 "small_pool_count": 8192, 00:04:07.092 "large_pool_count": 1024, 00:04:07.092 "small_bufsize": 8192, 00:04:07.092 "large_bufsize": 135168, 00:04:07.092 "enable_numa": false 00:04:07.092 } 00:04:07.092 } 00:04:07.092 ] 00:04:07.092 }, 00:04:07.092 { 00:04:07.092 "subsystem": "sock", 00:04:07.092 "config": [ 00:04:07.092 { 00:04:07.092 "method": "sock_set_default_impl", 00:04:07.092 "params": { 00:04:07.092 "impl_name": "posix" 00:04:07.092 } 00:04:07.092 }, 00:04:07.092 { 00:04:07.092 "method": "sock_impl_set_options", 00:04:07.092 "params": { 00:04:07.092 "impl_name": "ssl", 00:04:07.092 "recv_buf_size": 4096, 00:04:07.092 "send_buf_size": 4096, 
00:04:07.092 "enable_recv_pipe": true, 00:04:07.092 "enable_quickack": false, 00:04:07.092 "enable_placement_id": 0, 00:04:07.092 "enable_zerocopy_send_server": true, 00:04:07.092 "enable_zerocopy_send_client": false, 00:04:07.092 "zerocopy_threshold": 0, 00:04:07.093 "tls_version": 0, 00:04:07.093 "enable_ktls": false 00:04:07.093 } 00:04:07.093 }, 00:04:07.093 { 00:04:07.093 "method": "sock_impl_set_options", 00:04:07.093 "params": { 00:04:07.093 "impl_name": "posix", 00:04:07.093 "recv_buf_size": 2097152, 00:04:07.093 "send_buf_size": 2097152, 00:04:07.093 "enable_recv_pipe": true, 00:04:07.093 "enable_quickack": false, 00:04:07.093 "enable_placement_id": 0, 00:04:07.093 "enable_zerocopy_send_server": true, 00:04:07.093 "enable_zerocopy_send_client": false, 00:04:07.093 "zerocopy_threshold": 0, 00:04:07.093 "tls_version": 0, 00:04:07.093 "enable_ktls": false 00:04:07.093 } 00:04:07.093 } 00:04:07.093 ] 00:04:07.093 }, 00:04:07.093 { 00:04:07.093 "subsystem": "vmd", 00:04:07.093 "config": [] 00:04:07.093 }, 00:04:07.093 { 00:04:07.093 "subsystem": "accel", 00:04:07.093 "config": [ 00:04:07.093 { 00:04:07.093 "method": "accel_set_options", 00:04:07.093 "params": { 00:04:07.093 "small_cache_size": 128, 00:04:07.093 "large_cache_size": 16, 00:04:07.093 "task_count": 2048, 00:04:07.093 "sequence_count": 2048, 00:04:07.093 "buf_count": 2048 00:04:07.093 } 00:04:07.093 } 00:04:07.093 ] 00:04:07.093 }, 00:04:07.093 { 00:04:07.093 "subsystem": "bdev", 00:04:07.093 "config": [ 00:04:07.093 { 00:04:07.093 "method": "bdev_set_options", 00:04:07.093 "params": { 00:04:07.093 "bdev_io_pool_size": 65535, 00:04:07.093 "bdev_io_cache_size": 256, 00:04:07.093 "bdev_auto_examine": true, 00:04:07.093 "iobuf_small_cache_size": 128, 00:04:07.093 "iobuf_large_cache_size": 16 00:04:07.093 } 00:04:07.093 }, 00:04:07.093 { 00:04:07.093 "method": "bdev_raid_set_options", 00:04:07.093 "params": { 00:04:07.093 "process_window_size_kb": 1024, 00:04:07.093 "process_max_bandwidth_mb_sec": 0 
00:04:07.093 } 00:04:07.093 }, 00:04:07.093 { 00:04:07.093 "method": "bdev_iscsi_set_options", 00:04:07.093 "params": { 00:04:07.093 "timeout_sec": 30 00:04:07.093 } 00:04:07.093 }, 00:04:07.093 { 00:04:07.093 "method": "bdev_nvme_set_options", 00:04:07.093 "params": { 00:04:07.093 "action_on_timeout": "none", 00:04:07.093 "timeout_us": 0, 00:04:07.093 "timeout_admin_us": 0, 00:04:07.093 "keep_alive_timeout_ms": 10000, 00:04:07.093 "arbitration_burst": 0, 00:04:07.093 "low_priority_weight": 0, 00:04:07.093 "medium_priority_weight": 0, 00:04:07.093 "high_priority_weight": 0, 00:04:07.093 "nvme_adminq_poll_period_us": 10000, 00:04:07.093 "nvme_ioq_poll_period_us": 0, 00:04:07.093 "io_queue_requests": 0, 00:04:07.093 "delay_cmd_submit": true, 00:04:07.093 "transport_retry_count": 4, 00:04:07.093 "bdev_retry_count": 3, 00:04:07.093 "transport_ack_timeout": 0, 00:04:07.093 "ctrlr_loss_timeout_sec": 0, 00:04:07.093 "reconnect_delay_sec": 0, 00:04:07.093 "fast_io_fail_timeout_sec": 0, 00:04:07.093 "disable_auto_failback": false, 00:04:07.093 "generate_uuids": false, 00:04:07.093 "transport_tos": 0, 00:04:07.093 "nvme_error_stat": false, 00:04:07.093 "rdma_srq_size": 0, 00:04:07.093 "io_path_stat": false, 00:04:07.093 "allow_accel_sequence": false, 00:04:07.093 "rdma_max_cq_size": 0, 00:04:07.093 "rdma_cm_event_timeout_ms": 0, 00:04:07.093 "dhchap_digests": [ 00:04:07.093 "sha256", 00:04:07.093 "sha384", 00:04:07.093 "sha512" 00:04:07.093 ], 00:04:07.093 "dhchap_dhgroups": [ 00:04:07.093 "null", 00:04:07.093 "ffdhe2048", 00:04:07.093 "ffdhe3072", 00:04:07.093 "ffdhe4096", 00:04:07.093 "ffdhe6144", 00:04:07.093 "ffdhe8192" 00:04:07.093 ] 00:04:07.093 } 00:04:07.093 }, 00:04:07.093 { 00:04:07.093 "method": "bdev_nvme_set_hotplug", 00:04:07.093 "params": { 00:04:07.093 "period_us": 100000, 00:04:07.093 "enable": false 00:04:07.093 } 00:04:07.093 }, 00:04:07.093 { 00:04:07.093 "method": "bdev_wait_for_examine" 00:04:07.093 } 00:04:07.093 ] 00:04:07.093 }, 00:04:07.093 { 
00:04:07.093 "subsystem": "scsi", 00:04:07.093 "config": null 00:04:07.093 }, 00:04:07.093 { 00:04:07.093 "subsystem": "scheduler", 00:04:07.093 "config": [ 00:04:07.093 { 00:04:07.093 "method": "framework_set_scheduler", 00:04:07.093 "params": { 00:04:07.093 "name": "static" 00:04:07.093 } 00:04:07.093 } 00:04:07.093 ] 00:04:07.093 }, 00:04:07.093 { 00:04:07.093 "subsystem": "vhost_scsi", 00:04:07.093 "config": [] 00:04:07.093 }, 00:04:07.093 { 00:04:07.093 "subsystem": "vhost_blk", 00:04:07.093 "config": [] 00:04:07.093 }, 00:04:07.093 { 00:04:07.093 "subsystem": "ublk", 00:04:07.093 "config": [] 00:04:07.093 }, 00:04:07.093 { 00:04:07.093 "subsystem": "nbd", 00:04:07.093 "config": [] 00:04:07.093 }, 00:04:07.093 { 00:04:07.093 "subsystem": "nvmf", 00:04:07.093 "config": [ 00:04:07.093 { 00:04:07.093 "method": "nvmf_set_config", 00:04:07.093 "params": { 00:04:07.093 "discovery_filter": "match_any", 00:04:07.093 "admin_cmd_passthru": { 00:04:07.093 "identify_ctrlr": false 00:04:07.093 }, 00:04:07.093 "dhchap_digests": [ 00:04:07.093 "sha256", 00:04:07.093 "sha384", 00:04:07.093 "sha512" 00:04:07.093 ], 00:04:07.093 "dhchap_dhgroups": [ 00:04:07.093 "null", 00:04:07.093 "ffdhe2048", 00:04:07.093 "ffdhe3072", 00:04:07.093 "ffdhe4096", 00:04:07.093 "ffdhe6144", 00:04:07.093 "ffdhe8192" 00:04:07.093 ] 00:04:07.093 } 00:04:07.093 }, 00:04:07.093 { 00:04:07.093 "method": "nvmf_set_max_subsystems", 00:04:07.093 "params": { 00:04:07.093 "max_subsystems": 1024 00:04:07.093 } 00:04:07.093 }, 00:04:07.093 { 00:04:07.093 "method": "nvmf_set_crdt", 00:04:07.093 "params": { 00:04:07.093 "crdt1": 0, 00:04:07.093 "crdt2": 0, 00:04:07.093 "crdt3": 0 00:04:07.093 } 00:04:07.093 }, 00:04:07.093 { 00:04:07.093 "method": "nvmf_create_transport", 00:04:07.093 "params": { 00:04:07.093 "trtype": "TCP", 00:04:07.093 "max_queue_depth": 128, 00:04:07.093 "max_io_qpairs_per_ctrlr": 127, 00:04:07.093 "in_capsule_data_size": 4096, 00:04:07.093 "max_io_size": 131072, 00:04:07.093 
"io_unit_size": 131072, 00:04:07.093 "max_aq_depth": 128, 00:04:07.093 "num_shared_buffers": 511, 00:04:07.093 "buf_cache_size": 4294967295, 00:04:07.093 "dif_insert_or_strip": false, 00:04:07.093 "zcopy": false, 00:04:07.093 "c2h_success": true, 00:04:07.093 "sock_priority": 0, 00:04:07.093 "abort_timeout_sec": 1, 00:04:07.093 "ack_timeout": 0, 00:04:07.093 "data_wr_pool_size": 0 00:04:07.093 } 00:04:07.093 } 00:04:07.093 ] 00:04:07.093 }, 00:04:07.093 { 00:04:07.093 "subsystem": "iscsi", 00:04:07.093 "config": [ 00:04:07.093 { 00:04:07.093 "method": "iscsi_set_options", 00:04:07.093 "params": { 00:04:07.093 "node_base": "iqn.2016-06.io.spdk", 00:04:07.093 "max_sessions": 128, 00:04:07.093 "max_connections_per_session": 2, 00:04:07.093 "max_queue_depth": 64, 00:04:07.093 "default_time2wait": 2, 00:04:07.093 "default_time2retain": 20, 00:04:07.093 "first_burst_length": 8192, 00:04:07.093 "immediate_data": true, 00:04:07.093 "allow_duplicated_isid": false, 00:04:07.093 "error_recovery_level": 0, 00:04:07.093 "nop_timeout": 60, 00:04:07.093 "nop_in_interval": 30, 00:04:07.093 "disable_chap": false, 00:04:07.093 "require_chap": false, 00:04:07.093 "mutual_chap": false, 00:04:07.093 "chap_group": 0, 00:04:07.093 "max_large_datain_per_connection": 64, 00:04:07.093 "max_r2t_per_connection": 4, 00:04:07.093 "pdu_pool_size": 36864, 00:04:07.093 "immediate_data_pool_size": 16384, 00:04:07.093 "data_out_pool_size": 2048 00:04:07.093 } 00:04:07.093 } 00:04:07.093 ] 00:04:07.093 } 00:04:07.093 ] 00:04:07.093 } 00:04:07.093 12:27:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:07.093 12:27:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2331157 00:04:07.093 12:27:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2331157 ']' 00:04:07.093 12:27:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2331157 00:04:07.093 12:27:49 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:04:07.093 12:27:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:07.093 12:27:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2331157 00:04:07.093 12:27:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:07.093 12:27:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:07.093 12:27:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2331157' 00:04:07.093 killing process with pid 2331157 00:04:07.093 12:27:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2331157 00:04:07.093 12:27:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2331157 00:04:07.662 12:27:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2331183 00:04:07.662 12:27:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:07.662 12:27:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:12.939 12:27:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2331183 00:04:12.939 12:27:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2331183 ']' 00:04:12.939 12:27:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2331183 00:04:12.939 12:27:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:12.939 12:27:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:12.939 12:27:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2331183 00:04:12.939 12:27:54 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:12.939 12:27:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:12.939 12:27:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2331183' 00:04:12.939 killing process with pid 2331183 00:04:12.939 12:27:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2331183 00:04:12.939 12:27:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2331183 00:04:12.939 12:27:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:12.939 12:27:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:12.939 00:04:12.939 real 0m6.276s 00:04:12.939 user 0m5.999s 00:04:12.939 sys 0m0.569s 00:04:12.939 12:27:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.939 12:27:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.939 ************************************ 00:04:12.939 END TEST skip_rpc_with_json 00:04:12.939 ************************************ 00:04:12.939 12:27:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:12.939 12:27:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.939 12:27:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.939 12:27:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.939 ************************************ 00:04:12.939 START TEST skip_rpc_with_delay 00:04:12.939 ************************************ 00:04:12.939 12:27:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:12.939 12:27:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:12.939 12:27:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:12.939 12:27:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:12.939 12:27:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.939 12:27:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:12.939 12:27:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.939 12:27:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:12.939 12:27:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.939 12:27:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:12.939 12:27:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.939 12:27:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:12.939 12:27:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:12.939 [2024-11-28 12:27:55.381173] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:12.939 12:27:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:12.939 12:27:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:12.939 12:27:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:12.939 12:27:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:12.939 00:04:12.939 real 0m0.069s 00:04:12.939 user 0m0.044s 00:04:12.939 sys 0m0.024s 00:04:12.939 12:27:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.939 12:27:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:12.939 ************************************ 00:04:12.939 END TEST skip_rpc_with_delay 00:04:12.939 ************************************ 00:04:12.939 12:27:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:12.939 12:27:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:12.939 12:27:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:12.939 12:27:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.939 12:27:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.939 12:27:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.198 ************************************ 00:04:13.198 START TEST exit_on_failed_rpc_init 00:04:13.198 ************************************ 00:04:13.198 12:27:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:13.198 12:27:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2332197 00:04:13.198 12:27:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2332197 00:04:13.198 12:27:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:04:13.198 12:27:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2332197 ']' 00:04:13.198 12:27:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.198 12:27:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.198 12:27:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.198 12:27:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.198 12:27:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:13.198 [2024-11-28 12:27:55.520115] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:04:13.198 [2024-11-28 12:27:55.520156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2332197 ] 00:04:13.198 [2024-11-28 12:27:55.581674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.198 [2024-11-28 12:27:55.622598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.456 12:27:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.456 12:27:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:13.456 12:27:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.456 12:27:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:13.456 
12:27:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:13.456 12:27:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:13.456 12:27:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.456 12:27:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.456 12:27:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.456 12:27:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.456 12:27:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.456 12:27:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.456 12:27:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.456 12:27:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:13.456 12:27:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:13.456 [2024-11-28 12:27:55.907831] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:04:13.456 [2024-11-28 12:27:55.907878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2332376 ] 00:04:13.456 [2024-11-28 12:27:55.967702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.715 [2024-11-28 12:27:56.008802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:13.715 [2024-11-28 12:27:56.008851] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:13.715 [2024-11-28 12:27:56.008861] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:13.715 [2024-11-28 12:27:56.008869] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:13.715 12:27:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:13.715 12:27:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:13.715 12:27:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:13.715 12:27:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:13.715 12:27:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:13.715 12:27:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:13.715 12:27:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:13.715 12:27:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2332197 00:04:13.715 12:27:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2332197 ']' 00:04:13.715 12:27:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2332197 00:04:13.715 12:27:56 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:13.715 12:27:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:13.715 12:27:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2332197 00:04:13.715 12:27:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:13.715 12:27:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:13.715 12:27:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2332197' 00:04:13.715 killing process with pid 2332197 00:04:13.715 12:27:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2332197 00:04:13.715 12:27:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2332197 00:04:13.975 00:04:13.975 real 0m0.937s 00:04:13.975 user 0m1.009s 00:04:13.975 sys 0m0.373s 00:04:13.975 12:27:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.975 12:27:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:13.975 ************************************ 00:04:13.975 END TEST exit_on_failed_rpc_init 00:04:13.975 ************************************ 00:04:13.975 12:27:56 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:13.975 00:04:13.975 real 0m13.106s 00:04:13.975 user 0m12.405s 00:04:13.975 sys 0m1.510s 00:04:13.975 12:27:56 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.975 12:27:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.975 ************************************ 00:04:13.975 END TEST skip_rpc 00:04:13.975 ************************************ 00:04:13.975 12:27:56 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:13.975 12:27:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.975 12:27:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.975 12:27:56 -- common/autotest_common.sh@10 -- # set +x 00:04:14.234 ************************************ 00:04:14.234 START TEST rpc_client 00:04:14.234 ************************************ 00:04:14.234 12:27:56 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:14.234 * Looking for test storage... 00:04:14.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:14.234 12:27:56 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:14.234 12:27:56 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:14.234 12:27:56 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:14.234 12:27:56 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:14.234 12:27:56 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.234 12:27:56 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.234 12:27:56 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.234 12:27:56 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.235 12:27:56 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:14.235 12:27:56 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.235 12:27:56 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:14.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.235 --rc genhtml_branch_coverage=1 00:04:14.235 --rc genhtml_function_coverage=1 00:04:14.235 --rc genhtml_legend=1 00:04:14.235 --rc geninfo_all_blocks=1 00:04:14.235 --rc geninfo_unexecuted_blocks=1 00:04:14.235 00:04:14.235 ' 00:04:14.235 12:27:56 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:14.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.235 --rc genhtml_branch_coverage=1 
00:04:14.235 --rc genhtml_function_coverage=1 00:04:14.235 --rc genhtml_legend=1 00:04:14.235 --rc geninfo_all_blocks=1 00:04:14.235 --rc geninfo_unexecuted_blocks=1 00:04:14.235 00:04:14.235 ' 00:04:14.235 12:27:56 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:14.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.235 --rc genhtml_branch_coverage=1 00:04:14.235 --rc genhtml_function_coverage=1 00:04:14.235 --rc genhtml_legend=1 00:04:14.235 --rc geninfo_all_blocks=1 00:04:14.235 --rc geninfo_unexecuted_blocks=1 00:04:14.235 00:04:14.235 ' 00:04:14.235 12:27:56 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:14.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.235 --rc genhtml_branch_coverage=1 00:04:14.235 --rc genhtml_function_coverage=1 00:04:14.235 --rc genhtml_legend=1 00:04:14.235 --rc geninfo_all_blocks=1 00:04:14.235 --rc geninfo_unexecuted_blocks=1 00:04:14.235 00:04:14.235 ' 00:04:14.235 12:27:56 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:14.235 OK 00:04:14.235 12:27:56 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:14.235 00:04:14.235 real 0m0.185s 00:04:14.235 user 0m0.098s 00:04:14.235 sys 0m0.097s 00:04:14.235 12:27:56 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.235 12:27:56 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:14.235 ************************************ 00:04:14.235 END TEST rpc_client 00:04:14.235 ************************************ 00:04:14.235 12:27:56 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:14.235 12:27:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.235 12:27:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.235 12:27:56 -- common/autotest_common.sh@10 
-- # set +x 00:04:14.235 ************************************ 00:04:14.235 START TEST json_config 00:04:14.235 ************************************ 00:04:14.235 12:27:56 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:14.495 12:27:56 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:14.495 12:27:56 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:14.495 12:27:56 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:14.495 12:27:56 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:14.495 12:27:56 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.495 12:27:56 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.495 12:27:56 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.495 12:27:56 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.495 12:27:56 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.495 12:27:56 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.495 12:27:56 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.495 12:27:56 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.495 12:27:56 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.495 12:27:56 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.495 12:27:56 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.495 12:27:56 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:14.495 12:27:56 json_config -- scripts/common.sh@345 -- # : 1 00:04:14.495 12:27:56 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.495 12:27:56 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.495 12:27:56 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:14.495 12:27:56 json_config -- scripts/common.sh@353 -- # local d=1 00:04:14.495 12:27:56 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.495 12:27:56 json_config -- scripts/common.sh@355 -- # echo 1 00:04:14.495 12:27:56 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.495 12:27:56 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:14.495 12:27:56 json_config -- scripts/common.sh@353 -- # local d=2 00:04:14.495 12:27:56 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.495 12:27:56 json_config -- scripts/common.sh@355 -- # echo 2 00:04:14.495 12:27:56 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.495 12:27:56 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.495 12:27:56 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.495 12:27:56 json_config -- scripts/common.sh@368 -- # return 0 00:04:14.495 12:27:56 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.495 12:27:56 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:14.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.495 --rc genhtml_branch_coverage=1 00:04:14.495 --rc genhtml_function_coverage=1 00:04:14.495 --rc genhtml_legend=1 00:04:14.495 --rc geninfo_all_blocks=1 00:04:14.495 --rc geninfo_unexecuted_blocks=1 00:04:14.495 00:04:14.495 ' 00:04:14.495 12:27:56 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:14.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.495 --rc genhtml_branch_coverage=1 00:04:14.495 --rc genhtml_function_coverage=1 00:04:14.495 --rc genhtml_legend=1 00:04:14.495 --rc geninfo_all_blocks=1 00:04:14.495 --rc geninfo_unexecuted_blocks=1 00:04:14.495 00:04:14.495 ' 00:04:14.495 12:27:56 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:14.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.495 --rc genhtml_branch_coverage=1 00:04:14.495 --rc genhtml_function_coverage=1 00:04:14.495 --rc genhtml_legend=1 00:04:14.495 --rc geninfo_all_blocks=1 00:04:14.495 --rc geninfo_unexecuted_blocks=1 00:04:14.495 00:04:14.495 ' 00:04:14.495 12:27:56 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:14.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.495 --rc genhtml_branch_coverage=1 00:04:14.495 --rc genhtml_function_coverage=1 00:04:14.495 --rc genhtml_legend=1 00:04:14.495 --rc geninfo_all_blocks=1 00:04:14.495 --rc geninfo_unexecuted_blocks=1 00:04:14.495 00:04:14.495 ' 00:04:14.495 12:27:56 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:14.495 12:27:56 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:14.495 12:27:56 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:14.495 12:27:56 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:14.495 12:27:56 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:14.495 12:27:56 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:14.495 12:27:56 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:14.495 12:27:56 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:14.495 12:27:56 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:14.495 12:27:56 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:14.495 12:27:56 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:14.495 12:27:56 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:14.495 12:27:56 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:14.495 12:27:56 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:14.495 12:27:56 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:14.495 12:27:56 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:14.495 12:27:56 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:14.495 12:27:56 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:14.495 12:27:56 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:14.495 12:27:56 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:14.495 12:27:56 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:14.495 12:27:56 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:14.495 12:27:56 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:14.495 12:27:56 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.495 12:27:56 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.495 12:27:56 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.495 12:27:56 json_config -- paths/export.sh@5 -- # export PATH 00:04:14.496 12:27:56 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.496 12:27:56 json_config -- nvmf/common.sh@51 -- # : 0 00:04:14.496 12:27:56 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:14.496 12:27:56 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:14.496 12:27:56 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:14.496 12:27:56 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:14.496 12:27:56 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:14.496 12:27:56 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:14.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:14.496 12:27:56 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:14.496 12:27:56 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:14.496 12:27:56 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:14.496 12:27:56 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:14.496 12:27:56 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:14.496 12:27:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:14.496 12:27:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:14.496 12:27:56 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:14.496 12:27:56 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:14.496 12:27:56 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:14.496 12:27:56 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:14.496 12:27:56 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:14.496 12:27:56 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:14.496 12:27:56 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:14.496 12:27:56 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:14.496 12:27:56 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:14.496 12:27:56 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:14.496 12:27:56 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:14.496 12:27:56 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:14.496 INFO: JSON configuration test init 00:04:14.496 12:27:56 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:14.496 12:27:56 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:14.496 12:27:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.496 12:27:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.496 12:27:56 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:14.496 12:27:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.496 12:27:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.496 12:27:56 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:14.496 12:27:56 json_config -- json_config/common.sh@9 -- # local app=target 00:04:14.496 12:27:56 json_config -- json_config/common.sh@10 -- # shift 00:04:14.496 12:27:56 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:14.496 12:27:56 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:14.496 12:27:56 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:14.496 12:27:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.496 12:27:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.496 12:27:56 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2332619 00:04:14.496 12:27:56 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:14.496 Waiting for target to run... 
00:04:14.496 12:27:56 json_config -- json_config/common.sh@25 -- # waitforlisten 2332619 /var/tmp/spdk_tgt.sock 00:04:14.496 12:27:56 json_config -- common/autotest_common.sh@835 -- # '[' -z 2332619 ']' 00:04:14.496 12:27:56 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:14.496 12:27:56 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:14.496 12:27:56 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:14.496 12:27:56 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:14.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:14.496 12:27:56 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:14.496 12:27:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.496 [2024-11-28 12:27:56.974469] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
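The `waitforlisten 2332619 /var/tmp/spdk_tgt.sock` step above blocks until the freshly launched `spdk_tgt` creates its RPC UNIX-domain socket. A minimal sketch of that polling pattern — the helper name, retry count, and the Python one-liner used to stand in for spdk_tgt are ours, not SPDK's actual implementation:

```shell
#!/usr/bin/env bash
# Poll until a UNIX-domain socket appears at $1, or give up after $2 retries.
# Returns 0 once the socket exists, 1 on timeout.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100} i=0
    while (( i < max_retries )); do
        [ -S "$sock" ] && return 0
        sleep 0.1
        (( ++i ))
    done
    return 1
}

# Demo: a background process binds the socket shortly after we start waiting.
sock=$(mktemp -u)
( sleep 0.2
  python3 -c "import socket,sys; s=socket.socket(socket.AF_UNIX); s.bind(sys.argv[1])" "$sock" ) &
wait_for_socket "$sock" 50 && echo "socket up"
wait
```

The real `waitforlisten` additionally verifies the PID is still alive between polls, so a crashed target fails fast instead of burning the full timeout.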
00:04:14.496 [2024-11-28 12:27:56.974522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2332619 ] 00:04:14.756 [2024-11-28 12:27:57.242703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.015 [2024-11-28 12:27:57.277080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.582 12:27:57 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:15.582 12:27:57 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:15.582 12:27:57 json_config -- json_config/common.sh@26 -- # echo '' 00:04:15.582 00:04:15.583 12:27:57 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:15.583 12:27:57 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:15.583 12:27:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.583 12:27:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.583 12:27:57 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:15.583 12:27:57 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:15.583 12:27:57 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:15.583 12:27:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.583 12:27:57 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:15.583 12:27:57 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:15.583 12:27:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:18.872 12:28:00 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:18.872 12:28:00 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:18.872 12:28:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:18.872 12:28:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.872 12:28:00 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:18.872 12:28:00 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:18.872 12:28:00 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:18.872 12:28:00 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:18.872 12:28:00 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:18.872 12:28:00 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:18.872 12:28:00 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:18.872 12:28:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:18.872 12:28:01 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:18.872 12:28:01 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:18.872 12:28:01 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:18.872 12:28:01 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:18.872 12:28:01 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:18.872 12:28:01 json_config -- json_config/json_config.sh@54 -- # sort 00:04:18.872 12:28:01 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:18.872 12:28:01 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:18.872 12:28:01 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:18.872 12:28:01 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:18.872 12:28:01 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:18.872 12:28:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.872 12:28:01 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:18.872 12:28:01 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:18.872 12:28:01 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:18.872 12:28:01 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:18.873 12:28:01 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:18.873 12:28:01 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:18.873 12:28:01 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:18.873 12:28:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:18.873 12:28:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.873 12:28:01 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:18.873 12:28:01 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:18.873 12:28:01 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:18.873 12:28:01 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:18.873 12:28:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:18.873 MallocForNvmf0 00:04:18.873 12:28:01 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
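The `tgt_check_notification_types` step above computes `type_diff` by concatenating the expected and reported notification-type lists and keeping only unpaired entries: anything present in both lists appears twice after sorting and is dropped by `uniq -u`, so an empty result means the lists match. A standalone sketch of that `tr | sort | uniq -u` idiom (the list contents below mirror the trace, but the script is illustrative):

```shell
#!/usr/bin/env bash
# Symmetric difference of two whitespace-separated lists:
# shared entries occur twice after sort and are suppressed by `uniq -u`.
enabled="bdev_register bdev_unregister fsdev_register fsdev_unregister"
reported="fsdev_register fsdev_unregister bdev_register bdev_unregister"

type_diff=$(echo "$enabled" "$reported" | tr ' ' '\n' | sort | uniq -u)

if [ -z "$type_diff" ]; then
    echo "lists match"
else
    echo "mismatch: $type_diff"
fi
```

Note the test is order-insensitive, which is why the trace can compare the hard-coded expected list against whatever order `notify_get_types` reports.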
00:04:18.873 12:28:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:19.132 MallocForNvmf1 00:04:19.132 12:28:01 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:19.132 12:28:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:19.391 [2024-11-28 12:28:01.727036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:19.391 12:28:01 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:19.391 12:28:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:19.650 12:28:01 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:19.650 12:28:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:19.650 12:28:02 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:19.650 12:28:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:19.910 12:28:02 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:19.910 12:28:02 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:20.170 [2024-11-28 12:28:02.493534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:20.170 12:28:02 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:20.170 12:28:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:20.170 12:28:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.170 12:28:02 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:20.170 12:28:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:20.170 12:28:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.170 12:28:02 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:20.170 12:28:02 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:20.170 12:28:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:20.431 MallocBdevForConfigChangeCheck 00:04:20.431 12:28:02 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:20.431 12:28:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:20.431 12:28:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.431 12:28:02 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:20.431 12:28:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:20.690 12:28:03 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:04:20.690 INFO: shutting down applications... 00:04:20.690 12:28:03 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:20.690 12:28:03 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:20.690 12:28:03 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:20.690 12:28:03 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:22.595 Calling clear_iscsi_subsystem 00:04:22.595 Calling clear_nvmf_subsystem 00:04:22.595 Calling clear_nbd_subsystem 00:04:22.595 Calling clear_ublk_subsystem 00:04:22.595 Calling clear_vhost_blk_subsystem 00:04:22.595 Calling clear_vhost_scsi_subsystem 00:04:22.595 Calling clear_bdev_subsystem 00:04:22.595 12:28:04 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:22.595 12:28:04 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:22.595 12:28:04 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:22.595 12:28:04 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:22.595 12:28:04 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:22.595 12:28:04 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:22.595 12:28:05 json_config -- json_config/json_config.sh@352 -- # break 00:04:22.595 12:28:05 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:22.595 12:28:05 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:22.595 12:28:05 json_config -- json_config/common.sh@31 -- # local app=target 00:04:22.595 12:28:05 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:22.595 12:28:05 json_config -- json_config/common.sh@35 -- # [[ -n 2332619 ]] 00:04:22.595 12:28:05 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2332619 00:04:22.595 12:28:05 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:22.595 12:28:05 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:22.595 12:28:05 json_config -- json_config/common.sh@41 -- # kill -0 2332619 00:04:22.595 12:28:05 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:23.163 12:28:05 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:23.163 12:28:05 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.163 12:28:05 json_config -- json_config/common.sh@41 -- # kill -0 2332619 00:04:23.163 12:28:05 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:23.163 12:28:05 json_config -- json_config/common.sh@43 -- # break 00:04:23.163 12:28:05 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:23.163 12:28:05 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:23.163 SPDK target shutdown done 00:04:23.163 12:28:05 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:23.163 INFO: relaunching applications... 
00:04:23.163 12:28:05 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:23.163 12:28:05 json_config -- json_config/common.sh@9 -- # local app=target 00:04:23.163 12:28:05 json_config -- json_config/common.sh@10 -- # shift 00:04:23.163 12:28:05 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:23.163 12:28:05 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:23.163 12:28:05 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:23.163 12:28:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.163 12:28:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.163 12:28:05 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2334245 00:04:23.163 12:28:05 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:23.163 Waiting for target to run... 00:04:23.163 12:28:05 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:23.163 12:28:05 json_config -- json_config/common.sh@25 -- # waitforlisten 2334245 /var/tmp/spdk_tgt.sock 00:04:23.163 12:28:05 json_config -- common/autotest_common.sh@835 -- # '[' -z 2334245 ']' 00:04:23.163 12:28:05 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:23.163 12:28:05 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.163 12:28:05 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:23.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:23.163 12:28:05 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.163 12:28:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.163 [2024-11-28 12:28:05.637002] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:04:23.163 [2024-11-28 12:28:05.637063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2334245 ] 00:04:23.733 [2024-11-28 12:28:06.077148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.733 [2024-11-28 12:28:06.134902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.020 [2024-11-28 12:28:09.164829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:27.020 [2024-11-28 12:28:09.197188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:27.588 12:28:09 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.588 12:28:09 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:27.588 12:28:09 json_config -- json_config/common.sh@26 -- # echo '' 00:04:27.588 00:04:27.588 12:28:09 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:27.588 12:28:09 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:27.588 INFO: Checking if target configuration is the same... 
00:04:27.588 12:28:09 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:27.588 12:28:09 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:27.588 12:28:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:27.588 + '[' 2 -ne 2 ']' 00:04:27.588 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:27.588 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:27.588 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:27.588 +++ basename /dev/fd/62 00:04:27.588 ++ mktemp /tmp/62.XXX 00:04:27.588 + tmp_file_1=/tmp/62.RSv 00:04:27.588 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:27.588 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:27.588 + tmp_file_2=/tmp/spdk_tgt_config.json.eGu 00:04:27.588 + ret=0 00:04:27.588 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:27.848 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:27.848 + diff -u /tmp/62.RSv /tmp/spdk_tgt_config.json.eGu 00:04:27.848 + echo 'INFO: JSON config files are the same' 00:04:27.848 INFO: JSON config files are the same 00:04:27.848 + rm /tmp/62.RSv /tmp/spdk_tgt_config.json.eGu 00:04:27.848 + exit 0 00:04:27.848 12:28:10 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:27.848 12:28:10 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:27.848 INFO: changing configuration and checking if this can be detected... 
00:04:27.848 12:28:10 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:27.848 12:28:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:28.107 12:28:10 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:28.107 12:28:10 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:28.107 12:28:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:28.107 + '[' 2 -ne 2 ']' 00:04:28.107 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:28.107 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:28.107 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:28.107 +++ basename /dev/fd/62 00:04:28.107 ++ mktemp /tmp/62.XXX 00:04:28.107 + tmp_file_1=/tmp/62.fDs 00:04:28.107 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:28.107 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:28.107 + tmp_file_2=/tmp/spdk_tgt_config.json.biU 00:04:28.107 + ret=0 00:04:28.107 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:28.367 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:28.367 + diff -u /tmp/62.fDs /tmp/spdk_tgt_config.json.biU 00:04:28.367 + ret=1 00:04:28.367 + echo '=== Start of file: /tmp/62.fDs ===' 00:04:28.367 + cat /tmp/62.fDs 00:04:28.367 + echo '=== End of file: /tmp/62.fDs ===' 00:04:28.367 + echo '' 00:04:28.367 + echo '=== Start of file: /tmp/spdk_tgt_config.json.biU ===' 00:04:28.367 + cat /tmp/spdk_tgt_config.json.biU 00:04:28.367 + echo '=== End of file: /tmp/spdk_tgt_config.json.biU ===' 00:04:28.367 + echo '' 00:04:28.367 + rm /tmp/62.fDs /tmp/spdk_tgt_config.json.biU 00:04:28.367 + exit 1 00:04:28.367 12:28:10 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:28.367 INFO: configuration change detected. 
00:04:28.367 12:28:10 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:28.367 12:28:10 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:28.367 12:28:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.367 12:28:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.367 12:28:10 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:28.367 12:28:10 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:28.367 12:28:10 json_config -- json_config/json_config.sh@324 -- # [[ -n 2334245 ]] 00:04:28.367 12:28:10 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:28.367 12:28:10 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:28.367 12:28:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.367 12:28:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.367 12:28:10 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:28.367 12:28:10 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:28.367 12:28:10 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:28.367 12:28:10 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:28.367 12:28:10 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:28.367 12:28:10 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:28.367 12:28:10 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:28.367 12:28:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.367 12:28:10 json_config -- json_config/json_config.sh@330 -- # killprocess 2334245 00:04:28.367 12:28:10 json_config -- common/autotest_common.sh@954 -- # '[' -z 2334245 ']' 00:04:28.367 12:28:10 json_config -- common/autotest_common.sh@958 -- # kill -0 
2334245 00:04:28.367 12:28:10 json_config -- common/autotest_common.sh@959 -- # uname 00:04:28.367 12:28:10 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.367 12:28:10 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2334245 00:04:28.627 12:28:10 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.627 12:28:10 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.627 12:28:10 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2334245' 00:04:28.627 killing process with pid 2334245 00:04:28.627 12:28:10 json_config -- common/autotest_common.sh@973 -- # kill 2334245 00:04:28.627 12:28:10 json_config -- common/autotest_common.sh@978 -- # wait 2334245 00:04:30.086 12:28:12 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:30.086 12:28:12 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:30.086 12:28:12 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:30.086 12:28:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.086 12:28:12 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:30.086 12:28:12 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:30.086 INFO: Success 00:04:30.086 00:04:30.086 real 0m15.730s 00:04:30.086 user 0m16.197s 00:04:30.086 sys 0m2.571s 00:04:30.086 12:28:12 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.086 12:28:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.086 ************************************ 00:04:30.086 END TEST json_config 00:04:30.086 ************************************ 00:04:30.086 12:28:12 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:30.086 12:28:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.086 12:28:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.086 12:28:12 -- common/autotest_common.sh@10 -- # set +x 00:04:30.086 ************************************ 00:04:30.086 START TEST json_config_extra_key 00:04:30.086 ************************************ 00:04:30.086 12:28:12 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:30.371 12:28:12 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:30.371 12:28:12 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:30.371 12:28:12 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:30.371 12:28:12 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:30.372 12:28:12 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.372 12:28:12 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.372 --rc genhtml_branch_coverage=1 00:04:30.372 --rc genhtml_function_coverage=1 00:04:30.372 --rc genhtml_legend=1 00:04:30.372 --rc geninfo_all_blocks=1 
00:04:30.372 --rc geninfo_unexecuted_blocks=1 00:04:30.372 00:04:30.372 ' 00:04:30.372 12:28:12 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.372 --rc genhtml_branch_coverage=1 00:04:30.372 --rc genhtml_function_coverage=1 00:04:30.372 --rc genhtml_legend=1 00:04:30.372 --rc geninfo_all_blocks=1 00:04:30.372 --rc geninfo_unexecuted_blocks=1 00:04:30.372 00:04:30.372 ' 00:04:30.372 12:28:12 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:30.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.372 --rc genhtml_branch_coverage=1 00:04:30.372 --rc genhtml_function_coverage=1 00:04:30.372 --rc genhtml_legend=1 00:04:30.372 --rc geninfo_all_blocks=1 00:04:30.372 --rc geninfo_unexecuted_blocks=1 00:04:30.372 00:04:30.372 ' 00:04:30.372 12:28:12 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.372 --rc genhtml_branch_coverage=1 00:04:30.372 --rc genhtml_function_coverage=1 00:04:30.372 --rc genhtml_legend=1 00:04:30.372 --rc geninfo_all_blocks=1 00:04:30.372 --rc geninfo_unexecuted_blocks=1 00:04:30.372 00:04:30.372 ' 00:04:30.372 12:28:12 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:30.372 12:28:12 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:30.372 12:28:12 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.372 12:28:12 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.372 12:28:12 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.372 12:28:12 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:30.372 12:28:12 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:30.372 12:28:12 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:30.372 12:28:12 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:30.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:30.373 12:28:12 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:30.373 12:28:12 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:30.373 12:28:12 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:30.373 12:28:12 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:30.373 12:28:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:30.373 12:28:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:30.373 12:28:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:30.373 12:28:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:30.373 12:28:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:30.373 12:28:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:30.373 12:28:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:30.373 12:28:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:30.373 12:28:12 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:30.373 12:28:12 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:30.373 INFO: launching applications... 00:04:30.373 12:28:12 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:30.373 12:28:12 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:30.373 12:28:12 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:30.373 12:28:12 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:30.373 12:28:12 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:30.373 12:28:12 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:30.373 12:28:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.373 12:28:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.373 12:28:12 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2335531 00:04:30.373 12:28:12 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:30.373 Waiting for target to run... 
00:04:30.373 12:28:12 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2335531 /var/tmp/spdk_tgt.sock 00:04:30.373 12:28:12 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2335531 ']' 00:04:30.373 12:28:12 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:30.373 12:28:12 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.373 12:28:12 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.373 12:28:12 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:30.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:30.373 12:28:12 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.373 12:28:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:30.373 [2024-11-28 12:28:12.789008] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:04:30.373 [2024-11-28 12:28:12.789061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2335531 ] 00:04:30.941 [2024-11-28 12:28:13.233036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.941 [2024-11-28 12:28:13.284929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.200 12:28:13 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.200 12:28:13 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:31.200 12:28:13 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:31.200 00:04:31.200 12:28:13 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:31.200 INFO: shutting down applications... 00:04:31.200 12:28:13 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:31.200 12:28:13 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:31.200 12:28:13 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:31.200 12:28:13 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2335531 ]] 00:04:31.200 12:28:13 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2335531 00:04:31.200 12:28:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:31.200 12:28:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:31.200 12:28:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2335531 00:04:31.200 12:28:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:31.769 12:28:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:31.769 12:28:14 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:31.769 12:28:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2335531 00:04:31.769 12:28:14 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:31.769 12:28:14 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:31.769 12:28:14 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:31.769 12:28:14 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:31.769 SPDK target shutdown done 00:04:31.769 12:28:14 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:31.769 Success 00:04:31.769 00:04:31.769 real 0m1.584s 00:04:31.769 user 0m1.208s 00:04:31.769 sys 0m0.564s 00:04:31.769 12:28:14 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.769 12:28:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:31.769 ************************************ 00:04:31.769 END TEST json_config_extra_key 00:04:31.769 ************************************ 00:04:31.769 12:28:14 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:31.769 12:28:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.769 12:28:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.769 12:28:14 -- common/autotest_common.sh@10 -- # set +x 00:04:31.769 ************************************ 00:04:31.769 START TEST alias_rpc 00:04:31.769 ************************************ 00:04:31.769 12:28:14 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:31.769 * Looking for test storage... 
00:04:32.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:32.029 12:28:14 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:32.029 12:28:14 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:32.029 12:28:14 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:32.029 12:28:14 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.029 12:28:14 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:32.029 12:28:14 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.029 12:28:14 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:32.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.029 --rc genhtml_branch_coverage=1 00:04:32.029 --rc genhtml_function_coverage=1 00:04:32.029 --rc genhtml_legend=1 00:04:32.029 --rc geninfo_all_blocks=1 00:04:32.029 --rc geninfo_unexecuted_blocks=1 00:04:32.029 00:04:32.029 ' 00:04:32.029 12:28:14 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:32.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.029 --rc genhtml_branch_coverage=1 00:04:32.029 --rc genhtml_function_coverage=1 00:04:32.029 --rc genhtml_legend=1 00:04:32.029 --rc geninfo_all_blocks=1 00:04:32.029 --rc geninfo_unexecuted_blocks=1 00:04:32.029 00:04:32.029 ' 00:04:32.029 12:28:14 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:04:32.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.029 --rc genhtml_branch_coverage=1 00:04:32.029 --rc genhtml_function_coverage=1 00:04:32.029 --rc genhtml_legend=1 00:04:32.029 --rc geninfo_all_blocks=1 00:04:32.029 --rc geninfo_unexecuted_blocks=1 00:04:32.029 00:04:32.029 ' 00:04:32.029 12:28:14 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:32.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.029 --rc genhtml_branch_coverage=1 00:04:32.029 --rc genhtml_function_coverage=1 00:04:32.029 --rc genhtml_legend=1 00:04:32.029 --rc geninfo_all_blocks=1 00:04:32.029 --rc geninfo_unexecuted_blocks=1 00:04:32.030 00:04:32.030 ' 00:04:32.030 12:28:14 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:32.030 12:28:14 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2335828 00:04:32.030 12:28:14 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2335828 00:04:32.030 12:28:14 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:32.030 12:28:14 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2335828 ']' 00:04:32.030 12:28:14 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.030 12:28:14 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.030 12:28:14 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.030 12:28:14 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.030 12:28:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.030 [2024-11-28 12:28:14.429289] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:04:32.030 [2024-11-28 12:28:14.429335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2335828 ] 00:04:32.030 [2024-11-28 12:28:14.487347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.030 [2024-11-28 12:28:14.529889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.289 12:28:14 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.289 12:28:14 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:32.289 12:28:14 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:32.548 12:28:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2335828 00:04:32.548 12:28:14 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2335828 ']' 00:04:32.548 12:28:14 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2335828 00:04:32.548 12:28:14 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:32.548 12:28:14 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:32.548 12:28:14 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2335828 00:04:32.548 12:28:15 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:32.548 12:28:15 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:32.548 12:28:15 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2335828' 00:04:32.548 killing process with pid 2335828 00:04:32.548 12:28:15 alias_rpc -- common/autotest_common.sh@973 -- # kill 2335828 00:04:32.548 12:28:15 alias_rpc -- common/autotest_common.sh@978 -- # wait 2335828 00:04:32.808 00:04:32.808 real 0m1.121s 00:04:32.808 user 0m1.149s 00:04:32.808 sys 0m0.398s 00:04:32.808 12:28:15 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.808 12:28:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.808 ************************************ 00:04:32.808 END TEST alias_rpc 00:04:32.808 ************************************ 00:04:33.066 12:28:15 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:33.066 12:28:15 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:33.066 12:28:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.066 12:28:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.066 12:28:15 -- common/autotest_common.sh@10 -- # set +x 00:04:33.066 ************************************ 00:04:33.066 START TEST spdkcli_tcp 00:04:33.066 ************************************ 00:04:33.066 12:28:15 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:33.066 * Looking for test storage... 
00:04:33.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:33.066 12:28:15 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:33.066 12:28:15 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:33.066 12:28:15 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:33.067 12:28:15 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.067 12:28:15 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:33.067 12:28:15 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.067 12:28:15 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:33.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.067 --rc genhtml_branch_coverage=1 00:04:33.067 --rc genhtml_function_coverage=1 00:04:33.067 --rc genhtml_legend=1 00:04:33.067 --rc geninfo_all_blocks=1 00:04:33.067 --rc geninfo_unexecuted_blocks=1 00:04:33.067 00:04:33.067 ' 00:04:33.067 12:28:15 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:33.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.067 --rc genhtml_branch_coverage=1 00:04:33.067 --rc genhtml_function_coverage=1 00:04:33.067 --rc genhtml_legend=1 00:04:33.067 --rc geninfo_all_blocks=1 00:04:33.067 --rc geninfo_unexecuted_blocks=1 00:04:33.067 00:04:33.067 ' 00:04:33.067 12:28:15 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:33.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.067 --rc genhtml_branch_coverage=1 00:04:33.067 --rc genhtml_function_coverage=1 00:04:33.067 --rc genhtml_legend=1 00:04:33.067 --rc geninfo_all_blocks=1 00:04:33.067 --rc geninfo_unexecuted_blocks=1 00:04:33.067 00:04:33.067 ' 00:04:33.067 12:28:15 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:33.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.067 --rc genhtml_branch_coverage=1 00:04:33.067 --rc genhtml_function_coverage=1 00:04:33.067 --rc genhtml_legend=1 00:04:33.067 --rc geninfo_all_blocks=1 00:04:33.067 --rc geninfo_unexecuted_blocks=1 00:04:33.067 00:04:33.067 ' 00:04:33.067 12:28:15 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:33.067 12:28:15 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:33.067 12:28:15 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:33.067 12:28:15 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:33.067 12:28:15 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:33.067 12:28:15 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:33.067 12:28:15 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:33.067 12:28:15 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:33.067 12:28:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:33.067 12:28:15 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2336117 00:04:33.067 12:28:15 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2336117 00:04:33.067 12:28:15 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:33.067 12:28:15 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2336117 ']' 00:04:33.067 12:28:15 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.067 12:28:15 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.067 12:28:15 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.067 12:28:15 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.067 12:28:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:33.326 [2024-11-28 12:28:15.610923] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:04:33.326 [2024-11-28 12:28:15.610978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2336117 ] 00:04:33.326 [2024-11-28 12:28:15.673777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:33.326 [2024-11-28 12:28:15.715029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.326 [2024-11-28 12:28:15.715031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.585 12:28:15 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.585 12:28:15 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:33.585 12:28:15 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2336125 00:04:33.585 12:28:15 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:33.585 12:28:15 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:33.844 [ 00:04:33.844 "bdev_malloc_delete", 00:04:33.844 "bdev_malloc_create", 00:04:33.844 "bdev_null_resize", 00:04:33.844 "bdev_null_delete", 00:04:33.844 "bdev_null_create", 00:04:33.844 "bdev_nvme_cuse_unregister", 00:04:33.844 "bdev_nvme_cuse_register", 00:04:33.844 "bdev_opal_new_user", 00:04:33.844 "bdev_opal_set_lock_state", 00:04:33.844 "bdev_opal_delete", 00:04:33.844 "bdev_opal_get_info", 00:04:33.844 "bdev_opal_create", 00:04:33.844 "bdev_nvme_opal_revert", 00:04:33.844 "bdev_nvme_opal_init", 00:04:33.844 "bdev_nvme_send_cmd", 00:04:33.844 "bdev_nvme_set_keys", 00:04:33.844 "bdev_nvme_get_path_iostat", 00:04:33.844 "bdev_nvme_get_mdns_discovery_info", 00:04:33.844 "bdev_nvme_stop_mdns_discovery", 00:04:33.844 "bdev_nvme_start_mdns_discovery", 00:04:33.844 "bdev_nvme_set_multipath_policy", 00:04:33.844 "bdev_nvme_set_preferred_path", 00:04:33.844 "bdev_nvme_get_io_paths", 00:04:33.844 "bdev_nvme_remove_error_injection", 00:04:33.845 "bdev_nvme_add_error_injection", 00:04:33.845 "bdev_nvme_get_discovery_info", 00:04:33.845 "bdev_nvme_stop_discovery", 00:04:33.845 "bdev_nvme_start_discovery", 00:04:33.845 "bdev_nvme_get_controller_health_info", 00:04:33.845 "bdev_nvme_disable_controller", 00:04:33.845 "bdev_nvme_enable_controller", 00:04:33.845 "bdev_nvme_reset_controller", 00:04:33.845 "bdev_nvme_get_transport_statistics", 00:04:33.845 "bdev_nvme_apply_firmware", 00:04:33.845 "bdev_nvme_detach_controller", 00:04:33.845 "bdev_nvme_get_controllers", 00:04:33.845 "bdev_nvme_attach_controller", 00:04:33.845 "bdev_nvme_set_hotplug", 00:04:33.845 "bdev_nvme_set_options", 00:04:33.845 "bdev_passthru_delete", 00:04:33.845 "bdev_passthru_create", 00:04:33.845 "bdev_lvol_set_parent_bdev", 00:04:33.845 "bdev_lvol_set_parent", 00:04:33.845 "bdev_lvol_check_shallow_copy", 00:04:33.845 "bdev_lvol_start_shallow_copy", 00:04:33.845 "bdev_lvol_grow_lvstore", 00:04:33.845 
"bdev_lvol_get_lvols", 00:04:33.845 "bdev_lvol_get_lvstores", 00:04:33.845 "bdev_lvol_delete", 00:04:33.845 "bdev_lvol_set_read_only", 00:04:33.845 "bdev_lvol_resize", 00:04:33.845 "bdev_lvol_decouple_parent", 00:04:33.845 "bdev_lvol_inflate", 00:04:33.845 "bdev_lvol_rename", 00:04:33.845 "bdev_lvol_clone_bdev", 00:04:33.845 "bdev_lvol_clone", 00:04:33.845 "bdev_lvol_snapshot", 00:04:33.845 "bdev_lvol_create", 00:04:33.845 "bdev_lvol_delete_lvstore", 00:04:33.845 "bdev_lvol_rename_lvstore", 00:04:33.845 "bdev_lvol_create_lvstore", 00:04:33.845 "bdev_raid_set_options", 00:04:33.845 "bdev_raid_remove_base_bdev", 00:04:33.845 "bdev_raid_add_base_bdev", 00:04:33.845 "bdev_raid_delete", 00:04:33.845 "bdev_raid_create", 00:04:33.845 "bdev_raid_get_bdevs", 00:04:33.845 "bdev_error_inject_error", 00:04:33.845 "bdev_error_delete", 00:04:33.845 "bdev_error_create", 00:04:33.845 "bdev_split_delete", 00:04:33.845 "bdev_split_create", 00:04:33.845 "bdev_delay_delete", 00:04:33.845 "bdev_delay_create", 00:04:33.845 "bdev_delay_update_latency", 00:04:33.845 "bdev_zone_block_delete", 00:04:33.845 "bdev_zone_block_create", 00:04:33.845 "blobfs_create", 00:04:33.845 "blobfs_detect", 00:04:33.845 "blobfs_set_cache_size", 00:04:33.845 "bdev_aio_delete", 00:04:33.845 "bdev_aio_rescan", 00:04:33.845 "bdev_aio_create", 00:04:33.845 "bdev_ftl_set_property", 00:04:33.845 "bdev_ftl_get_properties", 00:04:33.845 "bdev_ftl_get_stats", 00:04:33.845 "bdev_ftl_unmap", 00:04:33.845 "bdev_ftl_unload", 00:04:33.845 "bdev_ftl_delete", 00:04:33.845 "bdev_ftl_load", 00:04:33.845 "bdev_ftl_create", 00:04:33.845 "bdev_virtio_attach_controller", 00:04:33.845 "bdev_virtio_scsi_get_devices", 00:04:33.845 "bdev_virtio_detach_controller", 00:04:33.845 "bdev_virtio_blk_set_hotplug", 00:04:33.845 "bdev_iscsi_delete", 00:04:33.845 "bdev_iscsi_create", 00:04:33.845 "bdev_iscsi_set_options", 00:04:33.845 "accel_error_inject_error", 00:04:33.845 "ioat_scan_accel_module", 00:04:33.845 "dsa_scan_accel_module", 
00:04:33.845 "iaa_scan_accel_module", 00:04:33.845 "vfu_virtio_create_fs_endpoint", 00:04:33.845 "vfu_virtio_create_scsi_endpoint", 00:04:33.845 "vfu_virtio_scsi_remove_target", 00:04:33.845 "vfu_virtio_scsi_add_target", 00:04:33.845 "vfu_virtio_create_blk_endpoint", 00:04:33.845 "vfu_virtio_delete_endpoint", 00:04:33.845 "keyring_file_remove_key", 00:04:33.845 "keyring_file_add_key", 00:04:33.845 "keyring_linux_set_options", 00:04:33.845 "fsdev_aio_delete", 00:04:33.845 "fsdev_aio_create", 00:04:33.845 "iscsi_get_histogram", 00:04:33.845 "iscsi_enable_histogram", 00:04:33.845 "iscsi_set_options", 00:04:33.845 "iscsi_get_auth_groups", 00:04:33.845 "iscsi_auth_group_remove_secret", 00:04:33.845 "iscsi_auth_group_add_secret", 00:04:33.845 "iscsi_delete_auth_group", 00:04:33.845 "iscsi_create_auth_group", 00:04:33.845 "iscsi_set_discovery_auth", 00:04:33.845 "iscsi_get_options", 00:04:33.845 "iscsi_target_node_request_logout", 00:04:33.845 "iscsi_target_node_set_redirect", 00:04:33.845 "iscsi_target_node_set_auth", 00:04:33.845 "iscsi_target_node_add_lun", 00:04:33.845 "iscsi_get_stats", 00:04:33.845 "iscsi_get_connections", 00:04:33.845 "iscsi_portal_group_set_auth", 00:04:33.845 "iscsi_start_portal_group", 00:04:33.845 "iscsi_delete_portal_group", 00:04:33.845 "iscsi_create_portal_group", 00:04:33.845 "iscsi_get_portal_groups", 00:04:33.845 "iscsi_delete_target_node", 00:04:33.845 "iscsi_target_node_remove_pg_ig_maps", 00:04:33.845 "iscsi_target_node_add_pg_ig_maps", 00:04:33.845 "iscsi_create_target_node", 00:04:33.845 "iscsi_get_target_nodes", 00:04:33.845 "iscsi_delete_initiator_group", 00:04:33.845 "iscsi_initiator_group_remove_initiators", 00:04:33.845 "iscsi_initiator_group_add_initiators", 00:04:33.845 "iscsi_create_initiator_group", 00:04:33.845 "iscsi_get_initiator_groups", 00:04:33.845 "nvmf_set_crdt", 00:04:33.845 "nvmf_set_config", 00:04:33.845 "nvmf_set_max_subsystems", 00:04:33.845 "nvmf_stop_mdns_prr", 00:04:33.845 "nvmf_publish_mdns_prr", 
00:04:33.845 "nvmf_subsystem_get_listeners", 00:04:33.845 "nvmf_subsystem_get_qpairs", 00:04:33.845 "nvmf_subsystem_get_controllers", 00:04:33.845 "nvmf_get_stats", 00:04:33.845 "nvmf_get_transports", 00:04:33.845 "nvmf_create_transport", 00:04:33.845 "nvmf_get_targets", 00:04:33.845 "nvmf_delete_target", 00:04:33.845 "nvmf_create_target", 00:04:33.845 "nvmf_subsystem_allow_any_host", 00:04:33.845 "nvmf_subsystem_set_keys", 00:04:33.845 "nvmf_subsystem_remove_host", 00:04:33.845 "nvmf_subsystem_add_host", 00:04:33.845 "nvmf_ns_remove_host", 00:04:33.845 "nvmf_ns_add_host", 00:04:33.845 "nvmf_subsystem_remove_ns", 00:04:33.845 "nvmf_subsystem_set_ns_ana_group", 00:04:33.845 "nvmf_subsystem_add_ns", 00:04:33.845 "nvmf_subsystem_listener_set_ana_state", 00:04:33.845 "nvmf_discovery_get_referrals", 00:04:33.845 "nvmf_discovery_remove_referral", 00:04:33.845 "nvmf_discovery_add_referral", 00:04:33.845 "nvmf_subsystem_remove_listener", 00:04:33.845 "nvmf_subsystem_add_listener", 00:04:33.845 "nvmf_delete_subsystem", 00:04:33.845 "nvmf_create_subsystem", 00:04:33.845 "nvmf_get_subsystems", 00:04:33.845 "env_dpdk_get_mem_stats", 00:04:33.845 "nbd_get_disks", 00:04:33.845 "nbd_stop_disk", 00:04:33.845 "nbd_start_disk", 00:04:33.845 "ublk_recover_disk", 00:04:33.845 "ublk_get_disks", 00:04:33.845 "ublk_stop_disk", 00:04:33.845 "ublk_start_disk", 00:04:33.845 "ublk_destroy_target", 00:04:33.845 "ublk_create_target", 00:04:33.845 "virtio_blk_create_transport", 00:04:33.845 "virtio_blk_get_transports", 00:04:33.845 "vhost_controller_set_coalescing", 00:04:33.845 "vhost_get_controllers", 00:04:33.845 "vhost_delete_controller", 00:04:33.845 "vhost_create_blk_controller", 00:04:33.845 "vhost_scsi_controller_remove_target", 00:04:33.845 "vhost_scsi_controller_add_target", 00:04:33.845 "vhost_start_scsi_controller", 00:04:33.845 "vhost_create_scsi_controller", 00:04:33.845 "thread_set_cpumask", 00:04:33.845 "scheduler_set_options", 00:04:33.845 "framework_get_governor", 00:04:33.845 
"framework_get_scheduler", 00:04:33.845 "framework_set_scheduler", 00:04:33.845 "framework_get_reactors", 00:04:33.845 "thread_get_io_channels", 00:04:33.845 "thread_get_pollers", 00:04:33.845 "thread_get_stats", 00:04:33.845 "framework_monitor_context_switch", 00:04:33.845 "spdk_kill_instance", 00:04:33.845 "log_enable_timestamps", 00:04:33.845 "log_get_flags", 00:04:33.845 "log_clear_flag", 00:04:33.845 "log_set_flag", 00:04:33.845 "log_get_level", 00:04:33.845 "log_set_level", 00:04:33.845 "log_get_print_level", 00:04:33.845 "log_set_print_level", 00:04:33.845 "framework_enable_cpumask_locks", 00:04:33.845 "framework_disable_cpumask_locks", 00:04:33.845 "framework_wait_init", 00:04:33.845 "framework_start_init", 00:04:33.845 "scsi_get_devices", 00:04:33.845 "bdev_get_histogram", 00:04:33.845 "bdev_enable_histogram", 00:04:33.845 "bdev_set_qos_limit", 00:04:33.845 "bdev_set_qd_sampling_period", 00:04:33.845 "bdev_get_bdevs", 00:04:33.845 "bdev_reset_iostat", 00:04:33.845 "bdev_get_iostat", 00:04:33.845 "bdev_examine", 00:04:33.845 "bdev_wait_for_examine", 00:04:33.845 "bdev_set_options", 00:04:33.845 "accel_get_stats", 00:04:33.845 "accel_set_options", 00:04:33.845 "accel_set_driver", 00:04:33.845 "accel_crypto_key_destroy", 00:04:33.845 "accel_crypto_keys_get", 00:04:33.845 "accel_crypto_key_create", 00:04:33.845 "accel_assign_opc", 00:04:33.845 "accel_get_module_info", 00:04:33.845 "accel_get_opc_assignments", 00:04:33.845 "vmd_rescan", 00:04:33.845 "vmd_remove_device", 00:04:33.845 "vmd_enable", 00:04:33.845 "sock_get_default_impl", 00:04:33.845 "sock_set_default_impl", 00:04:33.845 "sock_impl_set_options", 00:04:33.845 "sock_impl_get_options", 00:04:33.845 "iobuf_get_stats", 00:04:33.845 "iobuf_set_options", 00:04:33.845 "keyring_get_keys", 00:04:33.845 "vfu_tgt_set_base_path", 00:04:33.846 "framework_get_pci_devices", 00:04:33.846 "framework_get_config", 00:04:33.846 "framework_get_subsystems", 00:04:33.846 "fsdev_set_opts", 00:04:33.846 "fsdev_get_opts", 
00:04:33.846 "trace_get_info", 00:04:33.846 "trace_get_tpoint_group_mask", 00:04:33.846 "trace_disable_tpoint_group", 00:04:33.846 "trace_enable_tpoint_group", 00:04:33.846 "trace_clear_tpoint_mask", 00:04:33.846 "trace_set_tpoint_mask", 00:04:33.846 "notify_get_notifications", 00:04:33.846 "notify_get_types", 00:04:33.846 "spdk_get_version", 00:04:33.846 "rpc_get_methods" 00:04:33.846 ] 00:04:33.846 12:28:16 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:33.846 12:28:16 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:33.846 12:28:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:33.846 12:28:16 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:33.846 12:28:16 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2336117 00:04:33.846 12:28:16 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2336117 ']' 00:04:33.846 12:28:16 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2336117 00:04:33.846 12:28:16 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:33.846 12:28:16 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.846 12:28:16 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2336117 00:04:33.846 12:28:16 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.846 12:28:16 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.846 12:28:16 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2336117' 00:04:33.846 killing process with pid 2336117 00:04:33.846 12:28:16 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2336117 00:04:33.846 12:28:16 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2336117 00:04:34.105 00:04:34.105 real 0m1.135s 00:04:34.105 user 0m1.940s 00:04:34.105 sys 0m0.419s 00:04:34.105 12:28:16 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.105 12:28:16 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:34.105 ************************************ 00:04:34.105 END TEST spdkcli_tcp 00:04:34.105 ************************************ 00:04:34.105 12:28:16 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:34.105 12:28:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.105 12:28:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.105 12:28:16 -- common/autotest_common.sh@10 -- # set +x 00:04:34.105 ************************************ 00:04:34.105 START TEST dpdk_mem_utility 00:04:34.105 ************************************ 00:04:34.105 12:28:16 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:34.365 * Looking for test storage... 00:04:34.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:34.365 12:28:16 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:34.365 12:28:16 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:34.365 12:28:16 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:34.365 12:28:16 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.365 12:28:16 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:34.365 12:28:16 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.365 12:28:16 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 
'LCOV_OPTS= 00:04:34.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.365 --rc genhtml_branch_coverage=1 00:04:34.365 --rc genhtml_function_coverage=1 00:04:34.365 --rc genhtml_legend=1 00:04:34.365 --rc geninfo_all_blocks=1 00:04:34.365 --rc geninfo_unexecuted_blocks=1 00:04:34.365 00:04:34.365 ' 00:04:34.365 12:28:16 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:34.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.365 --rc genhtml_branch_coverage=1 00:04:34.365 --rc genhtml_function_coverage=1 00:04:34.365 --rc genhtml_legend=1 00:04:34.365 --rc geninfo_all_blocks=1 00:04:34.365 --rc geninfo_unexecuted_blocks=1 00:04:34.365 00:04:34.365 ' 00:04:34.365 12:28:16 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:34.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.365 --rc genhtml_branch_coverage=1 00:04:34.365 --rc genhtml_function_coverage=1 00:04:34.365 --rc genhtml_legend=1 00:04:34.365 --rc geninfo_all_blocks=1 00:04:34.365 --rc geninfo_unexecuted_blocks=1 00:04:34.365 00:04:34.365 ' 00:04:34.365 12:28:16 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:34.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.365 --rc genhtml_branch_coverage=1 00:04:34.365 --rc genhtml_function_coverage=1 00:04:34.365 --rc genhtml_legend=1 00:04:34.365 --rc geninfo_all_blocks=1 00:04:34.365 --rc geninfo_unexecuted_blocks=1 00:04:34.365 00:04:34.365 ' 00:04:34.365 12:28:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:34.365 12:28:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2336419 00:04:34.365 12:28:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2336419 00:04:34.365 12:28:16 dpdk_mem_utility -- common/autotest_common.sh@835 -- # 
'[' -z 2336419 ']' 00:04:34.365 12:28:16 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.365 12:28:16 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.365 12:28:16 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.365 12:28:16 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.365 12:28:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:34.365 12:28:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:34.365 [2024-11-28 12:28:16.802360] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:04:34.365 [2024-11-28 12:28:16.802406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2336419 ] 00:04:34.365 [2024-11-28 12:28:16.864451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.625 [2024-11-28 12:28:16.908104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.625 12:28:17 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.625 12:28:17 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:34.625 12:28:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:34.625 12:28:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:34.625 12:28:17 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 
00:04:34.625 12:28:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:34.625 { 00:04:34.625 "filename": "/tmp/spdk_mem_dump.txt" 00:04:34.625 } 00:04:34.625 12:28:17 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.625 12:28:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:34.884 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:34.884 1 heaps totaling size 818.000000 MiB 00:04:34.884 size: 818.000000 MiB heap id: 0 00:04:34.884 end heaps---------- 00:04:34.884 9 mempools totaling size 603.782043 MiB 00:04:34.884 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:34.884 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:34.884 size: 100.555481 MiB name: bdev_io_2336419 00:04:34.884 size: 50.003479 MiB name: msgpool_2336419 00:04:34.884 size: 36.509338 MiB name: fsdev_io_2336419 00:04:34.884 size: 21.763794 MiB name: PDU_Pool 00:04:34.884 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:34.885 size: 4.133484 MiB name: evtpool_2336419 00:04:34.885 size: 0.026123 MiB name: Session_Pool 00:04:34.885 end mempools------- 00:04:34.885 6 memzones totaling size 4.142822 MiB 00:04:34.885 size: 1.000366 MiB name: RG_ring_0_2336419 00:04:34.885 size: 1.000366 MiB name: RG_ring_1_2336419 00:04:34.885 size: 1.000366 MiB name: RG_ring_4_2336419 00:04:34.885 size: 1.000366 MiB name: RG_ring_5_2336419 00:04:34.885 size: 0.125366 MiB name: RG_ring_2_2336419 00:04:34.885 size: 0.015991 MiB name: RG_ring_3_2336419 00:04:34.885 end memzones------- 00:04:34.885 12:28:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:34.885 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:34.885 list of free elements. 
size: 10.852478 MiB 00:04:34.885 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:34.885 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:34.885 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:34.885 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:34.885 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:34.885 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:34.885 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:34.885 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:34.885 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:34.885 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:34.885 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:34.885 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:34.885 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:34.885 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:34.885 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:34.885 list of standard malloc elements. 
size: 199.218628 MiB 00:04:34.885 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:34.885 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:34.885 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:34.885 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:34.885 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:34.885 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:34.885 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:34.885 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:34.885 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:34.885 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:34.885 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:34.885 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:34.885 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:34.885 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:34.885 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:34.885 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:34.885 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:34.885 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:34.885 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:34.885 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:34.885 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:34.885 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:34.885 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:34.885 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:34.885 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:34.885 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:34.885 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:34.885 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:34.885 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:34.885 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:34.885 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:34.885 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:34.885 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:34.885 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:34.885 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:34.885 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:34.885 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:34.885 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:34.885 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:34.885 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:34.885 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:34.885 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:34.885 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:34.885 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:34.885 list of memzone associated elements. 
size: 607.928894 MiB 00:04:34.885 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:34.885 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:34.885 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:34.885 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:34.885 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:34.885 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2336419_0 00:04:34.885 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:34.885 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2336419_0 00:04:34.885 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:34.885 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2336419_0 00:04:34.885 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:34.885 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:34.885 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:34.885 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:34.885 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:34.885 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2336419_0 00:04:34.885 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:34.885 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2336419 00:04:34.885 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:34.885 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2336419 00:04:34.885 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:34.885 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:34.885 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:34.885 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:34.885 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:34.885 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:34.885 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:34.885 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:34.885 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:34.885 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2336419 00:04:34.885 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:34.885 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2336419 00:04:34.885 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:34.885 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2336419 00:04:34.885 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:04:34.885 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2336419 00:04:34.885 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:34.885 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2336419 00:04:34.885 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:34.885 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2336419 00:04:34.885 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:34.885 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:34.885 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:34.885 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:34.885 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:34.885 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:34.885 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:34.885 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2336419 00:04:34.885 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:34.885 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2336419 00:04:34.885 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:04:34.885 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:34.885 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:34.885 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:34.885 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:34.885 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2336419 00:04:34.885 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:34.885 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:34.885 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:34.885 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2336419 00:04:34.885 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:34.885 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2336419 00:04:34.885 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:34.885 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2336419 00:04:34.885 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:34.885 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:34.885 12:28:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:34.885 12:28:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2336419 00:04:34.885 12:28:17 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2336419 ']' 00:04:34.885 12:28:17 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2336419 00:04:34.885 12:28:17 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:34.885 12:28:17 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.885 12:28:17 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2336419 00:04:34.886 12:28:17 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.886 12:28:17 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.886 12:28:17 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2336419' 00:04:34.886 killing process with pid 2336419 00:04:34.886 12:28:17 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2336419 00:04:34.886 12:28:17 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2336419 00:04:35.145 00:04:35.145 real 0m0.977s 00:04:35.145 user 0m0.928s 00:04:35.145 sys 0m0.374s 00:04:35.145 12:28:17 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.145 12:28:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:35.145 ************************************ 00:04:35.145 END TEST dpdk_mem_utility 00:04:35.145 ************************************ 00:04:35.145 12:28:17 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:35.145 12:28:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.145 12:28:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.145 12:28:17 -- common/autotest_common.sh@10 -- # set +x 00:04:35.145 ************************************ 00:04:35.145 START TEST event 00:04:35.145 ************************************ 00:04:35.145 12:28:17 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:35.404 * Looking for test storage... 
00:04:35.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:35.404 12:28:17 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:35.404 12:28:17 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:35.404 12:28:17 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:35.404 12:28:17 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:35.404 12:28:17 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.404 12:28:17 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.404 12:28:17 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.404 12:28:17 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.404 12:28:17 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.404 12:28:17 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.404 12:28:17 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.404 12:28:17 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.404 12:28:17 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.404 12:28:17 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.404 12:28:17 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.404 12:28:17 event -- scripts/common.sh@344 -- # case "$op" in 00:04:35.404 12:28:17 event -- scripts/common.sh@345 -- # : 1 00:04:35.404 12:28:17 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.404 12:28:17 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.404 12:28:17 event -- scripts/common.sh@365 -- # decimal 1 00:04:35.404 12:28:17 event -- scripts/common.sh@353 -- # local d=1 00:04:35.404 12:28:17 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.404 12:28:17 event -- scripts/common.sh@355 -- # echo 1 00:04:35.404 12:28:17 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.404 12:28:17 event -- scripts/common.sh@366 -- # decimal 2 00:04:35.404 12:28:17 event -- scripts/common.sh@353 -- # local d=2 00:04:35.404 12:28:17 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.404 12:28:17 event -- scripts/common.sh@355 -- # echo 2 00:04:35.404 12:28:17 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.404 12:28:17 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.404 12:28:17 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.404 12:28:17 event -- scripts/common.sh@368 -- # return 0 00:04:35.404 12:28:17 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.404 12:28:17 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:35.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.404 --rc genhtml_branch_coverage=1 00:04:35.404 --rc genhtml_function_coverage=1 00:04:35.404 --rc genhtml_legend=1 00:04:35.404 --rc geninfo_all_blocks=1 00:04:35.404 --rc geninfo_unexecuted_blocks=1 00:04:35.404 00:04:35.404 ' 00:04:35.404 12:28:17 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:35.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.404 --rc genhtml_branch_coverage=1 00:04:35.404 --rc genhtml_function_coverage=1 00:04:35.404 --rc genhtml_legend=1 00:04:35.404 --rc geninfo_all_blocks=1 00:04:35.404 --rc geninfo_unexecuted_blocks=1 00:04:35.404 00:04:35.404 ' 00:04:35.404 12:28:17 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:35.404 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:35.405 --rc genhtml_branch_coverage=1 00:04:35.405 --rc genhtml_function_coverage=1 00:04:35.405 --rc genhtml_legend=1 00:04:35.405 --rc geninfo_all_blocks=1 00:04:35.405 --rc geninfo_unexecuted_blocks=1 00:04:35.405 00:04:35.405 ' 00:04:35.405 12:28:17 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:35.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.405 --rc genhtml_branch_coverage=1 00:04:35.405 --rc genhtml_function_coverage=1 00:04:35.405 --rc genhtml_legend=1 00:04:35.405 --rc geninfo_all_blocks=1 00:04:35.405 --rc geninfo_unexecuted_blocks=1 00:04:35.405 00:04:35.405 ' 00:04:35.405 12:28:17 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:35.405 12:28:17 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:35.405 12:28:17 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:35.405 12:28:17 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:35.405 12:28:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.405 12:28:17 event -- common/autotest_common.sh@10 -- # set +x 00:04:35.405 ************************************ 00:04:35.405 START TEST event_perf 00:04:35.405 ************************************ 00:04:35.405 12:28:17 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:35.405 Running I/O for 1 seconds...[2024-11-28 12:28:17.839810] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:04:35.405 [2024-11-28 12:28:17.839857] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2336709 ] 00:04:35.405 [2024-11-28 12:28:17.901030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:35.663 [2024-11-28 12:28:17.946420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.663 [2024-11-28 12:28:17.946516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:35.663 [2024-11-28 12:28:17.946577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:35.663 [2024-11-28 12:28:17.946578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.601 Running I/O for 1 seconds... 00:04:36.601 lcore 0: 206039 00:04:36.601 lcore 1: 206040 00:04:36.601 lcore 2: 206040 00:04:36.601 lcore 3: 206041 00:04:36.601 done. 
00:04:36.601 00:04:36.601 real 0m1.161s 00:04:36.601 user 0m4.094s 00:04:36.601 sys 0m0.064s 00:04:36.601 12:28:18 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.601 12:28:18 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:36.601 ************************************ 00:04:36.601 END TEST event_perf 00:04:36.601 ************************************ 00:04:36.601 12:28:19 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:36.601 12:28:19 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:36.601 12:28:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.601 12:28:19 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.601 ************************************ 00:04:36.601 START TEST event_reactor 00:04:36.601 ************************************ 00:04:36.601 12:28:19 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:36.601 [2024-11-28 12:28:19.078482] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:04:36.601 [2024-11-28 12:28:19.078567] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2336961 ] 00:04:36.861 [2024-11-28 12:28:19.144624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.861 [2024-11-28 12:28:19.187030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.798 test_start 00:04:37.798 oneshot 00:04:37.798 tick 100 00:04:37.798 tick 100 00:04:37.798 tick 250 00:04:37.798 tick 100 00:04:37.798 tick 100 00:04:37.798 tick 250 00:04:37.798 tick 100 00:04:37.798 tick 500 00:04:37.798 tick 100 00:04:37.798 tick 100 00:04:37.798 tick 250 00:04:37.798 tick 100 00:04:37.798 tick 100 00:04:37.798 test_end 00:04:37.798 00:04:37.798 real 0m1.171s 00:04:37.798 user 0m1.102s 00:04:37.798 sys 0m0.065s 00:04:37.798 12:28:20 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.798 12:28:20 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:37.798 ************************************ 00:04:37.798 END TEST event_reactor 00:04:37.798 ************************************ 00:04:37.798 12:28:20 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:37.798 12:28:20 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:37.798 12:28:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.798 12:28:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:37.798 ************************************ 00:04:37.798 START TEST event_reactor_perf 00:04:37.798 ************************************ 00:04:37.798 12:28:20 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:37.798 [2024-11-28 12:28:20.302076] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:04:37.798 [2024-11-28 12:28:20.302126] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2337168 ] 00:04:38.057 [2024-11-28 12:28:20.358084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.057 [2024-11-28 12:28:20.399242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.995 test_start 00:04:38.995 test_end 00:04:38.995 Performance: 504949 events per second 00:04:38.995 00:04:38.995 real 0m1.148s 00:04:38.995 user 0m1.095s 00:04:38.995 sys 0m0.049s 00:04:38.995 12:28:21 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.995 12:28:21 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:38.995 ************************************ 00:04:38.995 END TEST event_reactor_perf 00:04:38.995 ************************************ 00:04:38.995 12:28:21 event -- event/event.sh@49 -- # uname -s 00:04:38.995 12:28:21 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:38.995 12:28:21 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:38.995 12:28:21 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.995 12:28:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.995 12:28:21 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.995 ************************************ 00:04:38.995 START TEST event_scheduler 00:04:38.995 ************************************ 00:04:38.995 12:28:21 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:39.255 * Looking for test storage... 00:04:39.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:39.255 12:28:21 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.255 12:28:21 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.255 12:28:21 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.255 12:28:21 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.255 12:28:21 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:39.255 12:28:21 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.255 12:28:21 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.255 --rc genhtml_branch_coverage=1 00:04:39.255 --rc genhtml_function_coverage=1 00:04:39.255 --rc genhtml_legend=1 00:04:39.255 --rc geninfo_all_blocks=1 00:04:39.255 --rc geninfo_unexecuted_blocks=1 00:04:39.255 00:04:39.255 ' 00:04:39.255 12:28:21 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.255 --rc genhtml_branch_coverage=1 00:04:39.255 --rc genhtml_function_coverage=1 00:04:39.255 --rc 
genhtml_legend=1 00:04:39.255 --rc geninfo_all_blocks=1 00:04:39.255 --rc geninfo_unexecuted_blocks=1 00:04:39.255 00:04:39.255 ' 00:04:39.255 12:28:21 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:39.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.255 --rc genhtml_branch_coverage=1 00:04:39.255 --rc genhtml_function_coverage=1 00:04:39.255 --rc genhtml_legend=1 00:04:39.255 --rc geninfo_all_blocks=1 00:04:39.255 --rc geninfo_unexecuted_blocks=1 00:04:39.255 00:04:39.255 ' 00:04:39.255 12:28:21 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.255 --rc genhtml_branch_coverage=1 00:04:39.255 --rc genhtml_function_coverage=1 00:04:39.255 --rc genhtml_legend=1 00:04:39.255 --rc geninfo_all_blocks=1 00:04:39.255 --rc geninfo_unexecuted_blocks=1 00:04:39.255 00:04:39.255 ' 00:04:39.255 12:28:21 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:39.255 12:28:21 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2337458 00:04:39.255 12:28:21 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:39.255 12:28:21 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.255 12:28:21 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2337458 00:04:39.255 12:28:21 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2337458 ']' 00:04:39.255 12:28:21 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.255 12:28:21 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.255 12:28:21 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.255 12:28:21 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.255 12:28:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:39.255 [2024-11-28 12:28:21.717996] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:04:39.255 [2024-11-28 12:28:21.718048] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2337458 ] 00:04:39.514 [2024-11-28 12:28:21.776273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:39.514 [2024-11-28 12:28:21.822784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.514 [2024-11-28 12:28:21.822875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.514 [2024-11-28 12:28:21.822979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:39.514 [2024-11-28 12:28:21.822981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:39.514 12:28:21 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.514 12:28:21 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:39.514 12:28:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:39.514 12:28:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.514 12:28:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:39.514 [2024-11-28 12:28:21.875526] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:39.514 [2024-11-28 12:28:21.875543] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:39.514 [2024-11-28 12:28:21.875551] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:39.514 [2024-11-28 12:28:21.875557] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:39.514 [2024-11-28 12:28:21.875562] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:39.514 12:28:21 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.514 12:28:21 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:39.514 12:28:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.514 12:28:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:39.514 [2024-11-28 12:28:21.950864] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:39.514 12:28:21 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.514 12:28:21 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:39.514 12:28:21 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.514 12:28:21 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.514 12:28:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:39.514 ************************************ 00:04:39.514 START TEST scheduler_create_thread 00:04:39.514 ************************************ 00:04:39.514 12:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:39.514 12:28:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:39.515 12:28:21 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.515 12:28:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.515 2 00:04:39.515 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.515 12:28:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:39.515 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.515 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.515 3 00:04:39.515 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.515 12:28:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:39.515 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.515 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.515 4 00:04:39.515 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.515 12:28:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:39.515 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.515 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.775 5 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.775 12:28:22 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.775 6 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.775 7 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.775 8 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.775 12:28:22 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.775 9 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.775 10 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.775 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.713 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.713 12:28:22 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:40.713 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.713 12:28:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.089 12:28:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.089 12:28:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:42.089 12:28:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:42.089 12:28:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.089 12:28:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.025 12:28:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.025 00:04:43.025 real 0m3.382s 00:04:43.025 user 0m0.022s 00:04:43.025 sys 0m0.008s 00:04:43.025 12:28:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.025 12:28:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.025 ************************************ 00:04:43.025 END TEST scheduler_create_thread 00:04:43.025 ************************************ 00:04:43.025 12:28:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:43.025 12:28:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2337458 00:04:43.025 12:28:25 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2337458 ']' 00:04:43.025 12:28:25 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 2337458 00:04:43.025 12:28:25 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:43.025 12:28:25 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.025 12:28:25 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2337458 00:04:43.025 12:28:25 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:43.025 12:28:25 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:43.025 12:28:25 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2337458' 00:04:43.025 killing process with pid 2337458 00:04:43.025 12:28:25 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2337458 00:04:43.025 12:28:25 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2337458 00:04:43.284 [2024-11-28 12:28:25.751008] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:43.544 00:04:43.544 real 0m4.449s 00:04:43.544 user 0m7.825s 00:04:43.544 sys 0m0.353s 00:04:43.544 12:28:25 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.544 12:28:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:43.544 ************************************ 00:04:43.544 END TEST event_scheduler 00:04:43.544 ************************************ 00:04:43.544 12:28:25 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:43.544 12:28:25 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:43.544 12:28:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.544 12:28:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.544 12:28:25 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.544 ************************************ 00:04:43.544 START TEST app_repeat 00:04:43.544 ************************************ 00:04:43.544 12:28:26 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:43.544 12:28:26 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.544 12:28:26 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.544 12:28:26 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:43.544 12:28:26 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:43.544 12:28:26 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:43.544 12:28:26 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:43.544 12:28:26 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:43.544 12:28:26 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2338241 00:04:43.544 12:28:26 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.544 12:28:26 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:43.544 12:28:26 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2338241' 00:04:43.544 Process app_repeat pid: 2338241 00:04:43.544 12:28:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:43.544 12:28:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:43.544 spdk_app_start Round 0 00:04:43.544 12:28:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2338241 /var/tmp/spdk-nbd.sock 00:04:43.544 12:28:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2338241 ']' 00:04:43.544 12:28:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:43.544 12:28:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.544 12:28:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:43.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:43.544 12:28:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.544 12:28:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:43.804 [2024-11-28 12:28:26.069055] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:04:43.804 [2024-11-28 12:28:26.069109] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2338241 ] 00:04:43.804 [2024-11-28 12:28:26.134573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:43.804 [2024-11-28 12:28:26.175844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.804 [2024-11-28 12:28:26.175848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.804 12:28:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.804 12:28:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:43.804 12:28:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.063 Malloc0 00:04:44.063 12:28:26 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.322 Malloc1 00:04:44.322 12:28:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.322 12:28:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.322 12:28:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.322 12:28:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:44.322 12:28:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.322 12:28:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:44.322 12:28:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.322 
12:28:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.322 12:28:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.322 12:28:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:44.322 12:28:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.322 12:28:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:44.322 12:28:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:44.322 12:28:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:44.322 12:28:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.322 12:28:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:44.581 /dev/nbd0 00:04:44.581 12:28:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:44.581 12:28:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:44.581 12:28:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:44.581 12:28:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:44.581 12:28:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:44.581 12:28:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:44.581 12:28:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:44.581 12:28:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:44.581 12:28:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:44.581 12:28:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:44.581 12:28:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:44.581 1+0 records in 00:04:44.581 1+0 records out 00:04:44.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187664 s, 21.8 MB/s 00:04:44.581 12:28:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.581 12:28:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:44.581 12:28:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.581 12:28:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:44.581 12:28:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:44.581 12:28:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.581 12:28:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.581 12:28:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:44.840 /dev/nbd1 00:04:44.840 12:28:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:44.840 12:28:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:44.840 12:28:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:44.840 12:28:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:44.840 12:28:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:44.840 12:28:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:44.840 12:28:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:44.840 12:28:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:44.840 12:28:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:44.840 12:28:27 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:44.840 12:28:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:44.840 1+0 records in 00:04:44.840 1+0 records out 00:04:44.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191337 s, 21.4 MB/s 00:04:44.840 12:28:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.840 12:28:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:44.840 12:28:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.840 12:28:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:44.840 12:28:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:44.840 12:28:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.840 12:28:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.840 12:28:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:44.840 12:28:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.840 12:28:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:44.840 12:28:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:44.840 { 00:04:44.840 "nbd_device": "/dev/nbd0", 00:04:44.840 "bdev_name": "Malloc0" 00:04:44.840 }, 00:04:44.840 { 00:04:44.840 "nbd_device": "/dev/nbd1", 00:04:44.840 "bdev_name": "Malloc1" 00:04:44.840 } 00:04:44.840 ]' 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:45.100 { 00:04:45.100 "nbd_device": "/dev/nbd0", 00:04:45.100 "bdev_name": "Malloc0" 00:04:45.100 
}, 00:04:45.100 { 00:04:45.100 "nbd_device": "/dev/nbd1", 00:04:45.100 "bdev_name": "Malloc1" 00:04:45.100 } 00:04:45.100 ]' 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:45.100 /dev/nbd1' 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:45.100 /dev/nbd1' 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:45.100 256+0 records in 00:04:45.100 256+0 records out 00:04:45.100 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108593 s, 96.6 MB/s 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:45.100 256+0 records in 00:04:45.100 256+0 records out 00:04:45.100 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141957 s, 73.9 MB/s 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:45.100 256+0 records in 00:04:45.100 256+0 records out 00:04:45.100 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149603 s, 70.1 MB/s 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:45.100 12:28:27 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.100 12:28:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:45.358 12:28:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:45.358 12:28:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:45.358 12:28:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:45.358 12:28:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:45.358 12:28:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:45.358 12:28:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:45.358 12:28:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:45.359 12:28:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:45.359 12:28:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.359 12:28:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:45.618 12:28:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:45.618 12:28:27 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:45.618 12:28:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:45.618 12:28:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:45.618 12:28:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:45.618 12:28:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:45.618 12:28:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:45.618 12:28:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:45.618 12:28:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:45.618 12:28:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.618 12:28:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:45.618 12:28:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:45.618 12:28:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:45.618 12:28:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:45.618 12:28:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:45.618 12:28:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:45.618 12:28:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:45.618 12:28:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:45.618 12:28:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:45.618 12:28:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:45.618 12:28:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:45.618 12:28:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:45.618 12:28:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:45.618 12:28:28 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:45.877 12:28:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:46.136 [2024-11-28 12:28:28.497721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.136 [2024-11-28 12:28:28.535333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.136 [2024-11-28 12:28:28.535335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.136 [2024-11-28 12:28:28.576441] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:46.136 [2024-11-28 12:28:28.576489] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:49.421 12:28:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:49.421 12:28:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:49.421 spdk_app_start Round 1 00:04:49.421 12:28:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2338241 /var/tmp/spdk-nbd.sock 00:04:49.421 12:28:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2338241 ']' 00:04:49.421 12:28:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:49.421 12:28:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.421 12:28:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:49.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:49.421 12:28:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.421 12:28:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:49.421 12:28:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.421 12:28:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:49.421 12:28:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:49.421 Malloc0 00:04:49.421 12:28:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:49.421 Malloc1 00:04:49.421 12:28:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.421 12:28:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.421 12:28:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.421 12:28:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:49.421 12:28:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.421 12:28:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:49.421 12:28:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.421 12:28:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.421 12:28:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.421 12:28:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:49.421 12:28:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.421 12:28:31 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:49.421 12:28:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:49.421 12:28:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:49.421 12:28:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.421 12:28:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:49.679 /dev/nbd0 00:04:49.679 12:28:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:49.679 12:28:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:49.679 12:28:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:49.679 12:28:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:49.679 12:28:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:49.679 12:28:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:49.679 12:28:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:49.679 12:28:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:49.679 12:28:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:49.679 12:28:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:49.679 12:28:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:49.679 1+0 records in 00:04:49.679 1+0 records out 00:04:49.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214684 s, 19.1 MB/s 00:04:49.679 12:28:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:49.679 12:28:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:49.679 12:28:32 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:49.679 12:28:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:49.679 12:28:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:49.679 12:28:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:49.679 12:28:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.679 12:28:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:49.936 /dev/nbd1 00:04:49.936 12:28:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:49.936 12:28:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:49.936 12:28:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:49.936 12:28:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:49.936 12:28:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:49.936 12:28:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:49.936 12:28:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:49.936 12:28:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:49.936 12:28:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:49.936 12:28:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:49.936 12:28:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:49.936 1+0 records in 00:04:49.936 1+0 records out 00:04:49.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194625 s, 21.0 MB/s 00:04:49.936 12:28:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:49.936 12:28:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:49.936 12:28:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:49.936 12:28:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:49.936 12:28:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:49.936 12:28:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:49.936 12:28:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.936 12:28:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.936 12:28:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.936 12:28:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.193 12:28:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:50.193 { 00:04:50.193 "nbd_device": "/dev/nbd0", 00:04:50.193 "bdev_name": "Malloc0" 00:04:50.193 }, 00:04:50.193 { 00:04:50.193 "nbd_device": "/dev/nbd1", 00:04:50.193 "bdev_name": "Malloc1" 00:04:50.193 } 00:04:50.193 ]' 00:04:50.193 12:28:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:50.193 { 00:04:50.193 "nbd_device": "/dev/nbd0", 00:04:50.193 "bdev_name": "Malloc0" 00:04:50.193 }, 00:04:50.193 { 00:04:50.193 "nbd_device": "/dev/nbd1", 00:04:50.193 "bdev_name": "Malloc1" 00:04:50.193 } 00:04:50.193 ]' 00:04:50.193 12:28:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:50.193 12:28:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:50.193 /dev/nbd1' 00:04:50.193 12:28:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:50.193 /dev/nbd1' 00:04:50.193 
12:28:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:50.193 12:28:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:50.193 12:28:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:50.193 12:28:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:50.193 12:28:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:50.193 12:28:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:50.193 12:28:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.193 12:28:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:50.193 12:28:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:50.193 12:28:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:50.193 12:28:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:50.193 12:28:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:50.193 256+0 records in 00:04:50.193 256+0 records out 00:04:50.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105395 s, 99.5 MB/s 00:04:50.193 12:28:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:50.193 12:28:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:50.193 256+0 records in 00:04:50.193 256+0 records out 00:04:50.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139805 s, 75.0 MB/s 00:04:50.193 12:28:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:50.193 12:28:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:50.193 256+0 records in 00:04:50.193 256+0 records out 00:04:50.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149738 s, 70.0 MB/s 00:04:50.194 12:28:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:50.194 12:28:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.194 12:28:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:50.194 12:28:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:50.194 12:28:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:50.194 12:28:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:50.194 12:28:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:50.194 12:28:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.194 12:28:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:50.451 12:28:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.451 12:28:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:50.451 12:28:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:50.451 12:28:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:50.451 12:28:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.451 12:28:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:50.451 12:28:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:50.451 12:28:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:50.451 12:28:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:50.451 12:28:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:50.451 12:28:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:50.451 12:28:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:50.451 12:28:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:50.451 12:28:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:50.451 12:28:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:50.451 12:28:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:50.451 12:28:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:50.451 12:28:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:50.451 12:28:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:50.451 12:28:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:50.709 12:28:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:50.709 12:28:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:50.709 12:28:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:50.709 12:28:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:50.709 12:28:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:50.709 12:28:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:50.709 12:28:33 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:50.709 12:28:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:50.709 12:28:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.709 12:28:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.709 12:28:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.966 12:28:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:50.966 12:28:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:50.966 12:28:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:50.966 12:28:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:50.966 12:28:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:50.966 12:28:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:50.966 12:28:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:50.966 12:28:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:50.966 12:28:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:50.966 12:28:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:50.966 12:28:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:50.966 12:28:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:50.966 12:28:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:51.223 12:28:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:51.481 [2024-11-28 12:28:33.752032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:51.481 [2024-11-28 12:28:33.790035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.481 [2024-11-28 12:28:33.790037] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.481 [2024-11-28 12:28:33.832041] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:51.481 [2024-11-28 12:28:33.832082] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:54.760 12:28:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:54.760 12:28:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:54.760 spdk_app_start Round 2 00:04:54.760 12:28:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2338241 /var/tmp/spdk-nbd.sock 00:04:54.760 12:28:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2338241 ']' 00:04:54.760 12:28:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:54.760 12:28:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.760 12:28:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:54.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
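The trace above runs `waitfornbd` and `waitfornbd_exit` around every nbd start/stop: a loop that checks `/proc/partitions` for the device name up to 20 times and `break`s once the condition holds. A minimal sketch of that bounded-retry pattern, generalized to an arbitrary command (the `wait_for` helper name and the marker-file demo are illustrative, not part of the SPDK scripts):

```shell
#!/bin/sh
# Poll a command up to 20 times with a short sleep between attempts,
# mirroring the waitfornbd loop that greps /proc/partitions for nbdN.
wait_for() {
    i=1
    while [ "$i" -le 20 ]; do
        if "$@"; then
            return 0          # condition met, stop polling (the 'break' in the trace)
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 1                  # gave up after 20 attempts
}

# Demo: wait for a marker file that appears shortly after polling starts.
marker=$(mktemp -u)
( sleep 0.3; : > "$marker" ) &
wait_for test -e "$marker" && echo "found"
rm -f "$marker"
```

The real scripts poll `/proc/partitions` specifically; the fixed 20-iteration bound keeps a failed nbd attach from hanging the whole autotest run.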
00:04:54.761 12:28:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.761 12:28:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:54.761 12:28:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.761 12:28:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:54.761 12:28:36 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.761 Malloc0 00:04:54.761 12:28:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.761 Malloc1 00:04:54.761 12:28:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.761 12:28:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.761 12:28:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.761 12:28:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:54.761 12:28:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.761 12:28:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:54.761 12:28:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.761 12:28:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.761 12:28:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.761 12:28:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:54.761 12:28:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.761 12:28:37 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:54.761 12:28:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:54.761 12:28:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:54.761 12:28:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.761 12:28:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:55.019 /dev/nbd0 00:04:55.019 12:28:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:55.019 12:28:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:55.019 12:28:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:55.019 12:28:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:55.019 12:28:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:55.019 12:28:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:55.019 12:28:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:55.019 12:28:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:55.019 12:28:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:55.019 12:28:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:55.019 12:28:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.019 1+0 records in 00:04:55.019 1+0 records out 00:04:55.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262671 s, 15.6 MB/s 00:04:55.019 12:28:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.019 12:28:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:55.019 12:28:37 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.019 12:28:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:55.019 12:28:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:55.019 12:28:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.019 12:28:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.019 12:28:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:55.278 /dev/nbd1 00:04:55.278 12:28:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:55.278 12:28:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:55.278 12:28:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:55.278 12:28:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:55.278 12:28:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:55.278 12:28:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:55.278 12:28:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:55.278 12:28:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:55.278 12:28:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:55.278 12:28:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:55.278 12:28:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.278 1+0 records in 00:04:55.278 1+0 records out 00:04:55.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213159 s, 19.2 MB/s 00:04:55.278 12:28:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.278 12:28:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:55.278 12:28:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.278 12:28:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:55.278 12:28:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:55.278 12:28:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.278 12:28:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.278 12:28:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.278 12:28:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.278 12:28:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:55.537 { 00:04:55.537 "nbd_device": "/dev/nbd0", 00:04:55.537 "bdev_name": "Malloc0" 00:04:55.537 }, 00:04:55.537 { 00:04:55.537 "nbd_device": "/dev/nbd1", 00:04:55.537 "bdev_name": "Malloc1" 00:04:55.537 } 00:04:55.537 ]' 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:55.537 { 00:04:55.537 "nbd_device": "/dev/nbd0", 00:04:55.537 "bdev_name": "Malloc0" 00:04:55.537 }, 00:04:55.537 { 00:04:55.537 "nbd_device": "/dev/nbd1", 00:04:55.537 "bdev_name": "Malloc1" 00:04:55.537 } 00:04:55.537 ]' 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:55.537 /dev/nbd1' 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:55.537 /dev/nbd1' 00:04:55.537 
12:28:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:55.537 256+0 records in 00:04:55.537 256+0 records out 00:04:55.537 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107566 s, 97.5 MB/s 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:55.537 256+0 records in 00:04:55.537 256+0 records out 00:04:55.537 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143192 s, 73.2 MB/s 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:55.537 256+0 records in 00:04:55.537 256+0 records out 00:04:55.537 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153943 s, 68.1 MB/s 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.537 12:28:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:55.796 12:28:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:55.796 12:28:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:55.796 12:28:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:55.796 12:28:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.796 12:28:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.796 12:28:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:55.796 12:28:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.796 12:28:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.796 12:28:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.796 12:28:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:56.055 12:28:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:56.055 12:28:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:56.055 12:28:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:56.055 12:28:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.055 12:28:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.055 12:28:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:56.055 12:28:38 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:56.055 12:28:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.055 12:28:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.055 12:28:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.055 12:28:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.314 12:28:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:56.314 12:28:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:56.314 12:28:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.314 12:28:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:56.314 12:28:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:56.314 12:28:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.314 12:28:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:56.314 12:28:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:56.314 12:28:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:56.314 12:28:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:56.314 12:28:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:56.314 12:28:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:56.314 12:28:38 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:56.573 12:28:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:56.573 [2024-11-28 12:28:39.015449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.573 [2024-11-28 12:28:39.052338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.573 [2024-11-28 12:28:39.052340] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.831 [2024-11-28 12:28:39.093562] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:56.831 [2024-11-28 12:28:39.093597] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:59.365 12:28:41 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2338241 /var/tmp/spdk-nbd.sock 00:04:59.365 12:28:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2338241 ']' 00:04:59.365 12:28:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:59.365 12:28:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.365 12:28:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:59.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:59.365 12:28:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.365 12:28:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.624 12:28:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.624 12:28:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:59.624 12:28:42 event.app_repeat -- event/event.sh@39 -- # killprocess 2338241 00:04:59.624 12:28:42 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2338241 ']' 00:04:59.624 12:28:42 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2338241 00:04:59.624 12:28:42 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:59.624 12:28:42 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.624 12:28:42 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2338241 00:04:59.624 12:28:42 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.624 12:28:42 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.624 12:28:42 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2338241' 00:04:59.624 killing process with pid 2338241 00:04:59.624 12:28:42 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2338241 00:04:59.624 12:28:42 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2338241 00:04:59.884 spdk_app_start is called in Round 0. 00:04:59.884 Shutdown signal received, stop current app iteration 00:04:59.884 Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 reinitialization... 00:04:59.884 spdk_app_start is called in Round 1. 00:04:59.884 Shutdown signal received, stop current app iteration 00:04:59.884 Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 reinitialization... 00:04:59.884 spdk_app_start is called in Round 2. 
00:04:59.884 Shutdown signal received, stop current app iteration 00:04:59.884 Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 reinitialization... 00:04:59.884 spdk_app_start is called in Round 3. 00:04:59.884 Shutdown signal received, stop current app iteration 00:04:59.884 12:28:42 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:59.884 12:28:42 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:59.884 00:04:59.884 real 0m16.214s 00:04:59.884 user 0m35.542s 00:04:59.884 sys 0m2.531s 00:04:59.884 12:28:42 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.884 12:28:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.884 ************************************ 00:04:59.884 END TEST app_repeat 00:04:59.884 ************************************ 00:04:59.884 12:28:42 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:59.884 12:28:42 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:59.884 12:28:42 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.884 12:28:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.884 12:28:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.884 ************************************ 00:04:59.884 START TEST cpu_locks 00:04:59.884 ************************************ 00:04:59.884 12:28:42 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:59.884 * Looking for test storage... 
00:05:00.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:00.144 12:28:42 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.144 12:28:42 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.144 12:28:42 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.144 12:28:42 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.144 12:28:42 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:00.144 12:28:42 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.144 12:28:42 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.144 --rc genhtml_branch_coverage=1 00:05:00.144 --rc genhtml_function_coverage=1 00:05:00.144 --rc genhtml_legend=1 00:05:00.144 --rc geninfo_all_blocks=1 00:05:00.144 --rc geninfo_unexecuted_blocks=1 00:05:00.144 00:05:00.144 ' 00:05:00.144 12:28:42 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.144 --rc genhtml_branch_coverage=1 00:05:00.144 --rc genhtml_function_coverage=1 00:05:00.144 --rc genhtml_legend=1 00:05:00.144 --rc geninfo_all_blocks=1 00:05:00.144 --rc geninfo_unexecuted_blocks=1 
00:05:00.144 00:05:00.144 ' 00:05:00.144 12:28:42 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.144 --rc genhtml_branch_coverage=1 00:05:00.144 --rc genhtml_function_coverage=1 00:05:00.144 --rc genhtml_legend=1 00:05:00.144 --rc geninfo_all_blocks=1 00:05:00.144 --rc geninfo_unexecuted_blocks=1 00:05:00.144 00:05:00.144 ' 00:05:00.144 12:28:42 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.144 --rc genhtml_branch_coverage=1 00:05:00.144 --rc genhtml_function_coverage=1 00:05:00.144 --rc genhtml_legend=1 00:05:00.144 --rc geninfo_all_blocks=1 00:05:00.144 --rc geninfo_unexecuted_blocks=1 00:05:00.144 00:05:00.144 ' 00:05:00.144 12:28:42 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:00.144 12:28:42 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:00.144 12:28:42 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:00.144 12:28:42 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:00.144 12:28:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.144 12:28:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.144 12:28:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.144 ************************************ 00:05:00.144 START TEST default_locks 00:05:00.144 ************************************ 00:05:00.144 12:28:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:00.144 12:28:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2341236 00:05:00.144 12:28:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2341236 00:05:00.144 12:28:42 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.144 12:28:42 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2341236 ']' 00:05:00.144 12:28:42 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.144 12:28:42 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.144 12:28:42 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.144 12:28:42 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.144 12:28:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.144 [2024-11-28 12:28:42.584465] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:05:00.144 [2024-11-28 12:28:42.584507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2341236 ] 00:05:00.144 [2024-11-28 12:28:42.645461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.403 [2024-11-28 12:28:42.689061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.403 12:28:42 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.403 12:28:42 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:00.403 12:28:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2341236 00:05:00.403 12:28:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2341236 00:05:00.403 12:28:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.972 lslocks: write error 00:05:00.972 12:28:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2341236 00:05:00.972 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2341236 ']' 00:05:00.972 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2341236 00:05:00.972 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:00.972 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.972 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2341236 00:05:00.972 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.972 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.972 12:28:43 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2341236' 00:05:00.972 killing process with pid 2341236 00:05:00.972 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2341236 00:05:00.972 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2341236 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2341236 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2341236 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2341236 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2341236 ']' 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:01.231 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.231 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2341236) - No such process 00:05:01.231 ERROR: process (pid: 2341236) is no longer running 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:01.231 00:05:01.231 real 0m1.143s 00:05:01.231 user 0m1.125s 00:05:01.231 sys 0m0.505s 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.231 12:28:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.231 ************************************ 00:05:01.231 END TEST default_locks 00:05:01.231 ************************************ 00:05:01.231 12:28:43 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:01.231 12:28:43 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.231 12:28:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.231 12:28:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.231 ************************************ 00:05:01.231 START TEST default_locks_via_rpc 00:05:01.231 ************************************ 00:05:01.231 12:28:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:01.231 12:28:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2341492 00:05:01.231 12:28:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2341492 00:05:01.231 12:28:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.231 12:28:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2341492 ']' 00:05:01.231 12:28:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.231 12:28:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.231 12:28:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.232 12:28:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.232 12:28:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.491 [2024-11-28 12:28:43.791094] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:05:01.491 [2024-11-28 12:28:43.791137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2341492 ] 00:05:01.491 [2024-11-28 12:28:43.853000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.491 [2024-11-28 12:28:43.895783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.750 12:28:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.750 12:28:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:01.750 12:28:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:01.750 12:28:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.750 12:28:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.750 12:28:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.750 12:28:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:01.750 12:28:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:01.750 12:28:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:01.750 12:28:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:01.750 12:28:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:01.750 12:28:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.750 12:28:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.750 12:28:44 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.750 12:28:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2341492 00:05:01.750 12:28:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2341492 00:05:01.750 12:28:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:02.009 12:28:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2341492 00:05:02.009 12:28:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2341492 ']' 00:05:02.009 12:28:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2341492 00:05:02.009 12:28:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:02.269 12:28:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.269 12:28:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2341492 00:05:02.269 12:28:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.269 12:28:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.269 12:28:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2341492' 00:05:02.269 killing process with pid 2341492 00:05:02.269 12:28:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2341492 00:05:02.269 12:28:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2341492 00:05:02.528 00:05:02.528 real 0m1.140s 00:05:02.528 user 0m1.115s 00:05:02.528 sys 0m0.518s 00:05:02.528 12:28:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.528 12:28:44 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.528 ************************************ 00:05:02.528 END TEST default_locks_via_rpc 00:05:02.528 ************************************ 00:05:02.528 12:28:44 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:02.528 12:28:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.528 12:28:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.528 12:28:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.528 ************************************ 00:05:02.528 START TEST non_locking_app_on_locked_coremask 00:05:02.528 ************************************ 00:05:02.528 12:28:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:02.528 12:28:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2341648 00:05:02.528 12:28:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2341648 /var/tmp/spdk.sock 00:05:02.528 12:28:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.528 12:28:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2341648 ']' 00:05:02.528 12:28:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.528 12:28:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.528 12:28:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:02.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.528 12:28:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.528 12:28:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.528 [2024-11-28 12:28:45.005190] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:05:02.528 [2024-11-28 12:28:45.005236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2341648 ] 00:05:02.797 [2024-11-28 12:28:45.068594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.797 [2024-11-28 12:28:45.109445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.055 12:28:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.055 12:28:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:03.055 12:28:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2341762 00:05:03.055 12:28:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2341762 /var/tmp/spdk2.sock 00:05:03.055 12:28:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:03.055 12:28:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2341762 ']' 00:05:03.055 12:28:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:05:03.055 12:28:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.055 12:28:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:03.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:03.055 12:28:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.055 12:28:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.055 [2024-11-28 12:28:45.371807] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:05:03.055 [2024-11-28 12:28:45.371858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2341762 ] 00:05:03.055 [2024-11-28 12:28:45.464421] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:03.055 [2024-11-28 12:28:45.464452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:03.055 [2024-11-28 12:28:45.545680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:04.012 12:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:04.012 12:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:04.012 12:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2341648
00:05:04.012 12:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2341648
00:05:04.012 12:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:04.012 lslocks: write error
00:05:04.012 12:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2341648
00:05:04.012 12:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2341648 ']'
00:05:04.012 12:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2341648
00:05:04.012 12:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:04.012 12:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:04.013 12:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2341648
00:05:04.013 12:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:04.013 12:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:04.013 12:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2341648'
killing process with pid 2341648
00:05:04.013 12:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2341648
00:05:04.013 12:28:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2341648
00:05:04.951 12:28:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2341762
00:05:04.951 12:28:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2341762 ']'
00:05:04.951 12:28:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2341762
00:05:04.951 12:28:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:04.951 12:28:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:04.951 12:28:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2341762
00:05:04.951 12:28:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:04.951 12:28:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:04.951 12:28:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2341762'
killing process with pid 2341762
00:05:04.951 12:28:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2341762
00:05:04.951 12:28:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2341762
00:05:04.951
00:05:04.951 real 0m2.506s
00:05:04.951 user 0m2.637s
00:05:04.951 sys 0m0.818s
00:05:04.951 12:28:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:04.951 12:28:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:04.951 ************************************
00:05:04.951 END TEST non_locking_app_on_locked_coremask
00:05:04.951 ************************************
00:05:05.210 12:28:47 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:05.210 12:28:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:05.210 12:28:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:05.210 12:28:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:05.210 ************************************
00:05:05.210 START TEST locking_app_on_unlocked_coremask
00:05:05.210 ************************************
00:05:05.210 12:28:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:05:05.210 12:28:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2342050
00:05:05.210 12:28:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2342050 /var/tmp/spdk.sock
00:05:05.210 12:28:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2342050 ']'
00:05:05.210 12:28:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:05.210 12:28:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:05.210 12:28:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:05.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:05.210 12:28:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:05.210 12:28:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:05.210 12:28:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:05.210 [2024-11-28 12:28:47.570634] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization...
00:05:05.210 [2024-11-28 12:28:47.570675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2342050 ]
00:05:05.210 [2024-11-28 12:28:47.632373] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:05.210 [2024-11-28 12:28:47.632398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:05.210 [2024-11-28 12:28:47.675103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:05.470 12:28:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:05.470 12:28:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:05.470 12:28:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2342254
00:05:05.470 12:28:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2342254 /var/tmp/spdk2.sock
00:05:05.470 12:28:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2342254 ']'
00:05:05.470 12:28:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:05.470 12:28:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:05.470 12:28:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:05.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:05.470 12:28:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:05.470 12:28:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:05.470 12:28:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:05.470 [2024-11-28 12:28:47.943383] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization...
00:05:05.470 [2024-11-28 12:28:47.943431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2342254 ]
00:05:05.730 [2024-11-28 12:28:48.026783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:05.730 [2024-11-28 12:28:48.107507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:06.299 12:28:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:06.299 12:28:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:06.299 12:28:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2342254
00:05:06.299 12:28:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2342254
00:05:06.299 12:28:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:06.867 lslocks: write error
00:05:06.867 12:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2342050
00:05:06.867 12:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2342050 ']'
00:05:06.867 12:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2342050
00:05:06.867 12:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:06.867 12:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:06.867 12:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2342050
00:05:06.867 12:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:06.867 12:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:06.867 12:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2342050'
killing process with pid 2342050
00:05:06.867 12:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2342050
00:05:06.867 12:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2342050
00:05:07.435 12:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2342254
00:05:07.435 12:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2342254 ']'
00:05:07.435 12:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2342254
00:05:07.435 12:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:07.435 12:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:07.435 12:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2342254
00:05:07.435 12:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:07.435 12:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:07.435 12:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2342254'
killing process with pid 2342254
00:05:07.435 12:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2342254
00:05:07.435 12:28:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2342254
00:05:07.694
00:05:07.694 real 0m2.634s
00:05:07.694 user 0m2.744s
00:05:07.694 sys 0m0.886s
00:05:07.694 12:28:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:07.694 12:28:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:07.694 ************************************
00:05:07.694 END TEST locking_app_on_unlocked_coremask
00:05:07.694 ************************************
00:05:07.694 12:28:50 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:07.694 12:28:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:07.694 12:28:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:07.694 12:28:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:07.952 ************************************
00:05:07.952 START TEST locking_app_on_locked_coremask
00:05:07.952 ************************************
00:05:07.952 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:05:07.952 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2342537
00:05:07.952 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2342537 /var/tmp/spdk.sock
00:05:07.952 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:07.952 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2342537 ']'
00:05:07.952 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:07.953 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:07.953 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:07.953 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:07.953 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:07.953 [2024-11-28 12:28:50.265196] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization...
00:05:07.953 [2024-11-28 12:28:50.265239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2342537 ]
00:05:07.953 [2024-11-28 12:28:50.328330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:07.953 [2024-11-28 12:28:50.370963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:08.211 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:08.211 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:08.211 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2342747
00:05:08.212 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2342747 /var/tmp/spdk2.sock
00:05:08.212 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:08.212 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:08.212 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2342747 /var/tmp/spdk2.sock
00:05:08.212 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:08.212 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:08.212 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:08.212 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:08.212 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2342747 /var/tmp/spdk2.sock
00:05:08.212 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2342747 ']'
00:05:08.212 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:08.212 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:08.212 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:08.212 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:08.212 12:28:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:08.212 [2024-11-28 12:28:50.640128] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization...
00:05:08.212 [2024-11-28 12:28:50.640177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2342747 ]
00:05:08.470 [2024-11-28 12:28:50.732135] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2342537 has claimed it.
00:05:08.470 [2024-11-28 12:28:50.732170] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:09.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2342747) - No such process
00:05:09.037 ERROR: process (pid: 2342747) is no longer running
00:05:09.037 12:28:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:09.037 12:28:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:09.037 12:28:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:09.037 12:28:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:09.037 12:28:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:09.037 12:28:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:09.037 12:28:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2342537
00:05:09.037 12:28:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2342537
00:05:09.037 12:28:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:09.296 lslocks: write error
00:05:09.296 12:28:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2342537
00:05:09.296 12:28:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2342537 ']'
00:05:09.296 12:28:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2342537
00:05:09.296 12:28:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:09.296 12:28:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:09.296 12:28:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2342537
00:05:09.296 12:28:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:09.296 12:28:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:09.296 12:28:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2342537'
killing process with pid 2342537
00:05:09.296 12:28:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2342537
00:05:09.296 12:28:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2342537
00:05:09.555
00:05:09.555 real 0m1.756s
00:05:09.555 user 0m1.898s
00:05:09.555 sys 0m0.594s
00:05:09.555 12:28:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:09.555 12:28:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:09.555 ************************************
00:05:09.555 END TEST locking_app_on_locked_coremask
00:05:09.555 ************************************
00:05:09.555 12:28:52 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:09.555 12:28:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:09.555 12:28:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:09.555 12:28:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:09.555 ************************************
00:05:09.555 START TEST locking_overlapped_coremask
00:05:09.555 ************************************
00:05:09.555 12:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:05:09.555 12:28:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2343014
00:05:09.555 12:28:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2343014 /var/tmp/spdk.sock
00:05:09.555 12:28:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:05:09.555 12:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2343014 ']'
00:05:09.555 12:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:09.555 12:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:09.555 12:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:09.555 12:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:09.555 12:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:09.814 [2024-11-28 12:28:52.084490] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization...
00:05:09.814 [2024-11-28 12:28:52.084532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2343014 ]
00:05:09.814 [2024-11-28 12:28:52.147624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:09.814 [2024-11-28 12:28:52.192830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:09.814 [2024-11-28 12:28:52.192929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:09.814 [2024-11-28 12:28:52.192929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:10.073 12:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:10.073 12:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:10.073 12:28:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2343030
00:05:10.073 12:28:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2343030 /var/tmp/spdk2.sock
00:05:10.073 12:28:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:10.073 12:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:10.073 12:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2343030 /var/tmp/spdk2.sock
00:05:10.073 12:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:10.073 12:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:10.073 12:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:10.073 12:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:10.073 12:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2343030 /var/tmp/spdk2.sock
00:05:10.073 12:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2343030 ']'
00:05:10.073 12:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:10.073 12:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:10.073 12:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:10.073 12:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:10.073 12:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:10.073 [2024-11-28 12:28:52.451524] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization...
00:05:10.073 [2024-11-28 12:28:52.451571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2343030 ]
00:05:10.073 [2024-11-28 12:28:52.544658] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2343014 has claimed it.
00:05:10.073 [2024-11-28 12:28:52.544695] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:10.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2343030) - No such process
00:05:10.641 ERROR: process (pid: 2343030) is no longer running
00:05:10.641 12:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:10.641 12:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:10.641 12:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:10.641 12:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:10.641 12:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:10.641 12:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:10.641 12:28:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:10.641 12:28:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:10.641 12:28:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:10.641 12:28:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:10.641 12:28:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2343014
00:05:10.641 12:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2343014 ']'
00:05:10.641 12:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2343014
00:05:10.641 12:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:05:10.641 12:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:10.641 12:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2343014
00:05:10.641 12:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:10.641 12:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:10.641 12:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2343014'
killing process with pid 2343014
00:05:10.641 12:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2343014
00:05:10.641 12:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2343014
00:05:11.208
00:05:11.208 real 0m1.415s
00:05:11.208 user 0m3.922s
00:05:11.208 sys 0m0.386s
00:05:11.209 12:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:11.209 12:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:11.209 ************************************
00:05:11.209 END TEST locking_overlapped_coremask
00:05:11.209 ************************************
00:05:11.209 12:28:53 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:11.209 12:28:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:11.209 12:28:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:11.209 12:28:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:11.209 ************************************
00:05:11.209 START TEST locking_overlapped_coremask_via_rpc
00:05:11.209 ************************************
00:05:11.209 12:28:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:05:11.209 12:28:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2343286
00:05:11.209 12:28:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2343286 /var/tmp/spdk.sock
00:05:11.209 12:28:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:11.209 12:28:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2343286 ']'
00:05:11.209 12:28:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:11.209 12:28:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:11.209 12:28:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:11.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.209 12:28:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.209 12:28:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.209 [2024-11-28 12:28:53.565004] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:05:11.209 [2024-11-28 12:28:53.565044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2343286 ] 00:05:11.209 [2024-11-28 12:28:53.627389] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:11.209 [2024-11-28 12:28:53.627412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.209 [2024-11-28 12:28:53.672997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.209 [2024-11-28 12:28:53.673096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.209 [2024-11-28 12:28:53.673098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.468 12:28:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.468 12:28:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:11.468 12:28:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2343294 00:05:11.468 12:28:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2343294 /var/tmp/spdk2.sock 00:05:11.468 12:28:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:05:11.468 12:28:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2343294 ']' 00:05:11.468 12:28:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:11.468 12:28:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.468 12:28:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:11.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:11.468 12:28:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.468 12:28:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.468 [2024-11-28 12:28:53.940434] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:05:11.468 [2024-11-28 12:28:53.940482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2343294 ] 00:05:11.727 [2024-11-28 12:28:54.034278] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
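The two spdk_tgt instances above are launched with coremasks 0x7 and 0x1c. The overlap between those masks is what this test depends on; a minimal sketch of the mask arithmetic (core numbers and the loop bound are illustrative):

```shell
# Coremask 0x7 covers cores 0-2; 0x1c covers cores 2-4.
# Their bitwise AND is the set of cores both targets try to claim.
m1=0x7
m2=0x1c
overlap=$(( m1 & m2 ))
printf 'overlap mask: 0x%x\n' "$overlap"

# List the overlapping core numbers (8 is an arbitrary upper bound
# large enough for both masks here).
for (( core = 0; core < 8; core++ )); do
    if (( (overlap >> core) & 1 )); then
        echo "core $core is claimed by both"
    fi
done
```

The result, 0x4, is core 2, which matches the "Failed to claim CPU core: 2" error the second target hits later in the log.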
00:05:11.727 [2024-11-28 12:28:54.034305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.727 [2024-11-28 12:28:54.122308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:11.727 [2024-11-28 12:28:54.122419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.727 [2024-11-28 12:28:54.122420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:12.294 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.294 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:12.294 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:12.294 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.294 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.295 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.295 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.295 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:12.295 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.295 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:12.295 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.295 12:28:54 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:12.295 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.295 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.295 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.295 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.295 [2024-11-28 12:28:54.793018] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2343286 has claimed it. 00:05:12.295 request: 00:05:12.295 { 00:05:12.295 "method": "framework_enable_cpumask_locks", 00:05:12.295 "req_id": 1 00:05:12.295 } 00:05:12.295 Got JSON-RPC error response 00:05:12.295 response: 00:05:12.295 { 00:05:12.295 "code": -32603, 00:05:12.295 "message": "Failed to claim CPU core: 2" 00:05:12.295 } 00:05:12.295 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:12.295 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:12.295 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:12.295 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:12.295 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:12.295 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2343286 /var/tmp/spdk.sock 00:05:12.295 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 2343286 ']' 00:05:12.295 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.295 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.295 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.295 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.295 12:28:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.554 12:28:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.554 12:28:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:12.554 12:28:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2343294 /var/tmp/spdk2.sock 00:05:12.554 12:28:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2343294 ']' 00:05:12.554 12:28:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.554 12:28:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.554 12:28:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
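The JSON-RPC error above (`"code": -32603`) is how the second target learns the core is already claimed. A sketch of how a test helper might pull the error code out of such a response; `rpc_error_code` is a hypothetical helper, not part of the SPDK scripts, and the crude string slicing assumes the field layout shown in the log:

```shell
# Sample response mirroring the one logged above.
response='{"code": -32603, "message": "Failed to claim CPU core: 2"}'

# Hypothetical helper: extract the numeric "code" field from a
# JSON-RPC error body using bash parameter expansion (no jq needed).
rpc_error_code() {
    local body=$1
    local code=${body#*\"code\": }   # drop everything through '"code": '
    echo "${code%%,*}"               # keep digits up to the next comma
}

code=$(rpc_error_code "$response")
echo "RPC failed with code $code"
```

A real harness would of course prefer a JSON parser; the point is only that -32603 (internal error) is what `framework_enable_cpumask_locks` returns when a core cannot be claimed.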
00:05:12.554 12:28:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.554 12:28:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.812 12:28:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.812 12:28:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:12.812 12:28:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:12.812 12:28:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:12.812 12:28:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:12.812 12:28:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:12.812 00:05:12.812 real 0m1.696s 00:05:12.812 user 0m0.820s 00:05:12.812 sys 0m0.134s 00:05:12.812 12:28:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.812 12:28:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.812 ************************************ 00:05:12.812 END TEST locking_overlapped_coremask_via_rpc 00:05:12.812 ************************************ 00:05:12.812 12:28:55 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:12.812 12:28:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2343286 ]] 00:05:12.812 12:28:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2343286 00:05:12.812 12:28:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2343286 ']' 00:05:12.812 12:28:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2343286 00:05:12.812 12:28:55 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:12.812 12:28:55 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.812 12:28:55 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2343286 00:05:12.812 12:28:55 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.812 12:28:55 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.812 12:28:55 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2343286' 00:05:12.812 killing process with pid 2343286 00:05:12.812 12:28:55 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2343286 00:05:12.812 12:28:55 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2343286 00:05:13.379 12:28:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2343294 ]] 00:05:13.379 12:28:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2343294 00:05:13.379 12:28:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2343294 ']' 00:05:13.379 12:28:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2343294 00:05:13.379 12:28:55 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:13.379 12:28:55 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.379 12:28:55 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2343294 00:05:13.379 12:28:55 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:13.379 12:28:55 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:13.379 12:28:55 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2343294' 00:05:13.379 killing process with pid 2343294 00:05:13.379 12:28:55 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2343294 00:05:13.379 12:28:55 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2343294 00:05:13.639 12:28:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:13.639 12:28:55 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:13.639 12:28:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2343286 ]] 00:05:13.639 12:28:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2343286 00:05:13.639 12:28:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2343286 ']' 00:05:13.639 12:28:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2343286 00:05:13.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2343286) - No such process 00:05:13.639 12:28:55 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2343286 is not found' 00:05:13.639 Process with pid 2343286 is not found 00:05:13.639 12:28:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2343294 ]] 00:05:13.639 12:28:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2343294 00:05:13.639 12:28:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2343294 ']' 00:05:13.639 12:28:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2343294 00:05:13.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2343294) - No such process 00:05:13.639 12:28:55 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2343294 is not found' 00:05:13.639 Process with pid 2343294 is not found 00:05:13.639 12:28:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:13.639 00:05:13.639 real 0m13.654s 00:05:13.639 user 0m23.975s 00:05:13.639 sys 0m4.787s 00:05:13.639 12:28:55 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.639 
12:28:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.639 ************************************ 00:05:13.639 END TEST cpu_locks 00:05:13.639 ************************************ 00:05:13.639 00:05:13.639 real 0m38.376s 00:05:13.639 user 1m13.896s 00:05:13.639 sys 0m8.202s 00:05:13.639 12:28:56 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.639 12:28:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.639 ************************************ 00:05:13.639 END TEST event 00:05:13.639 ************************************ 00:05:13.639 12:28:56 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:13.639 12:28:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.639 12:28:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.639 12:28:56 -- common/autotest_common.sh@10 -- # set +x 00:05:13.639 ************************************ 00:05:13.639 START TEST thread 00:05:13.639 ************************************ 00:05:13.639 12:28:56 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:13.639 * Looking for test storage... 
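The `check_remaining_locks` calls traced in the cpu_locks runs above compare a glob of the actual lock files against a brace expansion of the expected set. A self-contained sketch of that comparison, redirected to a temp directory so it does not touch the real /var/tmp paths:

```shell
# Sketch of cpu_locks.sh's check_remaining_locks: the lock files that
# exist must exactly match the expected set for cores 0-2.
tmp=$(mktemp -d)
touch "$tmp"/spdk_cpu_lock_{000..002}

locks=("$tmp"/spdk_cpu_lock_*)                      # files actually present
locks_expected=("$tmp"/spdk_cpu_lock_{000..002})    # files that should exist

if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
    echo "locks match"
fi
rm -rf "$tmp"
```

The long escaped string in the logged `[[ ... == \/\v\a\r... ]]` line is just the right-hand side of this comparison with every character backslash-quoted so it is matched literally rather than as a pattern.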
00:05:13.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:13.639 12:28:56 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:13.898 12:28:56 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.898 12:28:56 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:13.898 12:28:56 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.898 12:28:56 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.898 12:28:56 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.898 12:28:56 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.898 12:28:56 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.898 12:28:56 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.898 12:28:56 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.898 12:28:56 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.898 12:28:56 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.898 12:28:56 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.898 12:28:56 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.898 12:28:56 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.898 12:28:56 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:13.898 12:28:56 thread -- scripts/common.sh@345 -- # : 1 00:05:13.898 12:28:56 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.898 12:28:56 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.898 12:28:56 thread -- scripts/common.sh@365 -- # decimal 1 00:05:13.898 12:28:56 thread -- scripts/common.sh@353 -- # local d=1 00:05:13.898 12:28:56 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.898 12:28:56 thread -- scripts/common.sh@355 -- # echo 1 00:05:13.899 12:28:56 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.899 12:28:56 thread -- scripts/common.sh@366 -- # decimal 2 00:05:13.899 12:28:56 thread -- scripts/common.sh@353 -- # local d=2 00:05:13.899 12:28:56 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.899 12:28:56 thread -- scripts/common.sh@355 -- # echo 2 00:05:13.899 12:28:56 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.899 12:28:56 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.899 12:28:56 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.899 12:28:56 thread -- scripts/common.sh@368 -- # return 0 00:05:13.899 12:28:56 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.899 12:28:56 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.899 --rc genhtml_branch_coverage=1 00:05:13.899 --rc genhtml_function_coverage=1 00:05:13.899 --rc genhtml_legend=1 00:05:13.899 --rc geninfo_all_blocks=1 00:05:13.899 --rc geninfo_unexecuted_blocks=1 00:05:13.899 00:05:13.899 ' 00:05:13.899 12:28:56 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.899 --rc genhtml_branch_coverage=1 00:05:13.899 --rc genhtml_function_coverage=1 00:05:13.899 --rc genhtml_legend=1 00:05:13.899 --rc geninfo_all_blocks=1 00:05:13.899 --rc geninfo_unexecuted_blocks=1 00:05:13.899 00:05:13.899 ' 00:05:13.899 12:28:56 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.899 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.899 --rc genhtml_branch_coverage=1 00:05:13.899 --rc genhtml_function_coverage=1 00:05:13.899 --rc genhtml_legend=1 00:05:13.899 --rc geninfo_all_blocks=1 00:05:13.899 --rc geninfo_unexecuted_blocks=1 00:05:13.899 00:05:13.899 ' 00:05:13.899 12:28:56 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.899 --rc genhtml_branch_coverage=1 00:05:13.899 --rc genhtml_function_coverage=1 00:05:13.899 --rc genhtml_legend=1 00:05:13.899 --rc geninfo_all_blocks=1 00:05:13.899 --rc geninfo_unexecuted_blocks=1 00:05:13.899 00:05:13.899 ' 00:05:13.899 12:28:56 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:13.899 12:28:56 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:13.899 12:28:56 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.899 12:28:56 thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.899 ************************************ 00:05:13.899 START TEST thread_poller_perf 00:05:13.899 ************************************ 00:05:13.899 12:28:56 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:13.899 [2024-11-28 12:28:56.292765] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
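The `cmp_versions 1.15 '<' 2` trace from scripts/common.sh above compares dotted versions field by field. A standalone reimplementation of the less-than case (a sketch, not the scripts/common.sh code itself; numeric fields only, so components with leading zeros would need extra handling):

```shell
# version_lt A B: succeed iff dotted version A is strictly less than B.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)          # split both versions on '.'
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                        # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This mirrors why the log takes the `lt 1.15 2` branch: the installed lcov reports 1.15, which is below 2.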
00:05:13.899 [2024-11-28 12:28:56.292831] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2343854 ] 00:05:13.899 [2024-11-28 12:28:56.360025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.899 [2024-11-28 12:28:56.400231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.899 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:15.275 [2024-11-28T11:28:57.794Z] ====================================== 00:05:15.275 [2024-11-28T11:28:57.794Z] busy:2309593606 (cyc) 00:05:15.275 [2024-11-28T11:28:57.794Z] total_run_count: 405000 00:05:15.275 [2024-11-28T11:28:57.794Z] tsc_hz: 2300000000 (cyc) 00:05:15.275 [2024-11-28T11:28:57.794Z] ====================================== 00:05:15.275 [2024-11-28T11:28:57.794Z] poller_cost: 5702 (cyc), 2479 (nsec) 00:05:15.275 00:05:15.275 real 0m1.178s 00:05:15.275 user 0m1.110s 00:05:15.275 sys 0m0.064s 00:05:15.275 12:28:57 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.275 12:28:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:15.275 ************************************ 00:05:15.275 END TEST thread_poller_perf 00:05:15.275 ************************************ 00:05:15.275 12:28:57 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:15.275 12:28:57 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:15.275 12:28:57 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.275 12:28:57 thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.275 ************************************ 00:05:15.275 START TEST thread_poller_perf 00:05:15.275 
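The poller_perf summary above reduces to two divisions: cost in cycles is busy cycles over run count, and cost in nanoseconds rescales that by the TSC frequency. Reproducing the first run's arithmetic with its logged numbers:

```shell
# Numbers from the first poller_perf run above (busy 1-us pollers):
#   poller_cost (cyc)  = busy / total_run_count
#   poller_cost (nsec) = cyc * 1e9 / tsc_hz
busy=2309593606
runs=405000
tsc_hz=2300000000

cost_cyc=$(( busy / runs ))
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"
```

This recovers the logged `poller_cost: 5702 (cyc), 2479 (nsec)`; the same formula applied to the second run (2301731200 cycles over 5370000 runs) gives its 428 cyc / 186 nsec figure.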
************************************ 00:05:15.275 12:28:57 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:15.275 [2024-11-28 12:28:57.542681] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:05:15.275 [2024-11-28 12:28:57.542749] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2344108 ] 00:05:15.275 [2024-11-28 12:28:57.607153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.275 [2024-11-28 12:28:57.646990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.275 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:16.212 [2024-11-28T11:28:58.731Z] ====================================== 00:05:16.212 [2024-11-28T11:28:58.731Z] busy:2301731200 (cyc) 00:05:16.212 [2024-11-28T11:28:58.731Z] total_run_count: 5370000 00:05:16.212 [2024-11-28T11:28:58.731Z] tsc_hz: 2300000000 (cyc) 00:05:16.212 [2024-11-28T11:28:58.731Z] ====================================== 00:05:16.212 [2024-11-28T11:28:58.731Z] poller_cost: 428 (cyc), 186 (nsec) 00:05:16.212 00:05:16.212 real 0m1.168s 00:05:16.212 user 0m1.101s 00:05:16.212 sys 0m0.064s 00:05:16.212 12:28:58 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.212 12:28:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.212 ************************************ 00:05:16.212 END TEST thread_poller_perf 00:05:16.212 ************************************ 00:05:16.212 12:28:58 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:16.212 00:05:16.212 real 0m2.641s 00:05:16.212 user 0m2.366s 00:05:16.212 sys 0m0.288s 00:05:16.212 12:28:58 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.212 12:28:58 thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.212 ************************************ 00:05:16.212 END TEST thread 00:05:16.212 ************************************ 00:05:16.472 12:28:58 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:16.472 12:28:58 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:16.472 12:28:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.472 12:28:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.472 12:28:58 -- common/autotest_common.sh@10 -- # set +x 00:05:16.472 ************************************ 00:05:16.472 START TEST app_cmdline 00:05:16.472 ************************************ 00:05:16.472 12:28:58 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:16.472 * Looking for test storage... 00:05:16.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:16.472 12:28:58 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:16.472 12:28:58 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:16.472 12:28:58 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:16.472 12:28:58 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.472 12:28:58 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:16.472 12:28:58 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.472 12:28:58 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:16.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.472 --rc genhtml_branch_coverage=1 
00:05:16.472 --rc genhtml_function_coverage=1 00:05:16.472 --rc genhtml_legend=1 00:05:16.472 --rc geninfo_all_blocks=1 00:05:16.472 --rc geninfo_unexecuted_blocks=1 00:05:16.472 00:05:16.472 ' 00:05:16.472 12:28:58 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:16.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.472 --rc genhtml_branch_coverage=1 00:05:16.472 --rc genhtml_function_coverage=1 00:05:16.472 --rc genhtml_legend=1 00:05:16.472 --rc geninfo_all_blocks=1 00:05:16.472 --rc geninfo_unexecuted_blocks=1 00:05:16.472 00:05:16.472 ' 00:05:16.472 12:28:58 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:16.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.472 --rc genhtml_branch_coverage=1 00:05:16.472 --rc genhtml_function_coverage=1 00:05:16.472 --rc genhtml_legend=1 00:05:16.472 --rc geninfo_all_blocks=1 00:05:16.472 --rc geninfo_unexecuted_blocks=1 00:05:16.472 00:05:16.472 ' 00:05:16.472 12:28:58 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:16.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.472 --rc genhtml_branch_coverage=1 00:05:16.472 --rc genhtml_function_coverage=1 00:05:16.472 --rc genhtml_legend=1 00:05:16.472 --rc geninfo_all_blocks=1 00:05:16.472 --rc geninfo_unexecuted_blocks=1 00:05:16.472 00:05:16.472 ' 00:05:16.472 12:28:58 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:16.472 12:28:58 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2344401 00:05:16.472 12:28:58 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2344401 00:05:16.472 12:28:58 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:16.472 12:28:58 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2344401 ']' 00:05:16.472 12:28:58 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:16.472 12:28:58 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.472 12:28:58 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.472 12:28:58 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.472 12:28:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:16.731 [2024-11-28 12:28:59.009010] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:05:16.731 [2024-11-28 12:28:59.009061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2344401 ] 00:05:16.731 [2024-11-28 12:28:59.071565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.731 [2024-11-28 12:28:59.111518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.990 12:28:59 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.990 12:28:59 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:16.990 12:28:59 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:16.990 { 00:05:16.990 "version": "SPDK v25.01-pre git sha1 bf92c7a42", 00:05:16.990 "fields": { 00:05:16.990 "major": 25, 00:05:16.990 "minor": 1, 00:05:16.990 "patch": 0, 00:05:16.990 "suffix": "-pre", 00:05:16.990 "commit": "bf92c7a42" 00:05:16.990 } 00:05:16.990 } 00:05:17.249 12:28:59 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:17.249 12:28:59 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:17.249 12:28:59 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:05:17.249 12:28:59 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:17.249 12:28:59 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:17.249 12:28:59 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:17.249 12:28:59 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:17.249 12:28:59 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.249 12:28:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:17.249 12:28:59 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.249 12:28:59 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:17.249 12:28:59 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:17.249 12:28:59 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:17.249 12:28:59 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:17.249 12:28:59 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:17.249 12:28:59 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:17.249 12:28:59 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.249 12:28:59 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:17.249 12:28:59 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.249 12:28:59 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:17.249 12:28:59 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:05:17.249 12:28:59 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:17.249 12:28:59 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:17.249 12:28:59 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:17.249 request: 00:05:17.249 { 00:05:17.249 "method": "env_dpdk_get_mem_stats", 00:05:17.249 "req_id": 1 00:05:17.249 } 00:05:17.249 Got JSON-RPC error response 00:05:17.249 response: 00:05:17.249 { 00:05:17.249 "code": -32601, 00:05:17.249 "message": "Method not found" 00:05:17.249 } 00:05:17.249 12:28:59 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:17.249 12:28:59 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:17.249 12:28:59 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:17.249 12:28:59 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:17.249 12:28:59 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2344401 00:05:17.249 12:28:59 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2344401 ']' 00:05:17.249 12:28:59 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2344401 00:05:17.249 12:28:59 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:17.249 12:28:59 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.249 12:28:59 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2344401 00:05:17.508 12:28:59 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.508 12:28:59 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.508 12:28:59 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2344401' 00:05:17.508 killing process with pid 2344401 00:05:17.508 
12:28:59 app_cmdline -- common/autotest_common.sh@973 -- # kill 2344401 00:05:17.508 12:28:59 app_cmdline -- common/autotest_common.sh@978 -- # wait 2344401 00:05:17.768 00:05:17.768 real 0m1.319s 00:05:17.768 user 0m1.535s 00:05:17.768 sys 0m0.436s 00:05:17.768 12:29:00 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.768 12:29:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:17.768 ************************************ 00:05:17.768 END TEST app_cmdline 00:05:17.768 ************************************ 00:05:17.768 12:29:00 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:17.768 12:29:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.768 12:29:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.768 12:29:00 -- common/autotest_common.sh@10 -- # set +x 00:05:17.768 ************************************ 00:05:17.768 START TEST version 00:05:17.768 ************************************ 00:05:17.768 12:29:00 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:17.768 * Looking for test storage... 
00:05:17.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:17.768 12:29:00 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:17.768 12:29:00 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:17.768 12:29:00 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.028 12:29:00 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.028 12:29:00 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.028 12:29:00 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.028 12:29:00 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.028 12:29:00 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.028 12:29:00 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.028 12:29:00 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.028 12:29:00 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.028 12:29:00 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.028 12:29:00 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.028 12:29:00 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.028 12:29:00 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.028 12:29:00 version -- scripts/common.sh@344 -- # case "$op" in 00:05:18.028 12:29:00 version -- scripts/common.sh@345 -- # : 1 00:05:18.028 12:29:00 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.028 12:29:00 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.028 12:29:00 version -- scripts/common.sh@365 -- # decimal 1 00:05:18.028 12:29:00 version -- scripts/common.sh@353 -- # local d=1 00:05:18.028 12:29:00 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.028 12:29:00 version -- scripts/common.sh@355 -- # echo 1 00:05:18.028 12:29:00 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.028 12:29:00 version -- scripts/common.sh@366 -- # decimal 2 00:05:18.028 12:29:00 version -- scripts/common.sh@353 -- # local d=2 00:05:18.028 12:29:00 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.028 12:29:00 version -- scripts/common.sh@355 -- # echo 2 00:05:18.028 12:29:00 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.028 12:29:00 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.028 12:29:00 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.028 12:29:00 version -- scripts/common.sh@368 -- # return 0 00:05:18.028 12:29:00 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.028 12:29:00 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.028 --rc genhtml_branch_coverage=1 00:05:18.028 --rc genhtml_function_coverage=1 00:05:18.028 --rc genhtml_legend=1 00:05:18.028 --rc geninfo_all_blocks=1 00:05:18.028 --rc geninfo_unexecuted_blocks=1 00:05:18.028 00:05:18.028 ' 00:05:18.028 12:29:00 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.028 --rc genhtml_branch_coverage=1 00:05:18.028 --rc genhtml_function_coverage=1 00:05:18.028 --rc genhtml_legend=1 00:05:18.028 --rc geninfo_all_blocks=1 00:05:18.028 --rc geninfo_unexecuted_blocks=1 00:05:18.028 00:05:18.028 ' 00:05:18.028 12:29:00 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.028 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.028 --rc genhtml_branch_coverage=1 00:05:18.028 --rc genhtml_function_coverage=1 00:05:18.028 --rc genhtml_legend=1 00:05:18.028 --rc geninfo_all_blocks=1 00:05:18.028 --rc geninfo_unexecuted_blocks=1 00:05:18.028 00:05:18.028 ' 00:05:18.028 12:29:00 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.028 --rc genhtml_branch_coverage=1 00:05:18.028 --rc genhtml_function_coverage=1 00:05:18.028 --rc genhtml_legend=1 00:05:18.028 --rc geninfo_all_blocks=1 00:05:18.028 --rc geninfo_unexecuted_blocks=1 00:05:18.028 00:05:18.028 ' 00:05:18.028 12:29:00 version -- app/version.sh@17 -- # get_header_version major 00:05:18.028 12:29:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:18.028 12:29:00 version -- app/version.sh@14 -- # cut -f2 00:05:18.028 12:29:00 version -- app/version.sh@14 -- # tr -d '"' 00:05:18.028 12:29:00 version -- app/version.sh@17 -- # major=25 00:05:18.028 12:29:00 version -- app/version.sh@18 -- # get_header_version minor 00:05:18.028 12:29:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:18.028 12:29:00 version -- app/version.sh@14 -- # cut -f2 00:05:18.028 12:29:00 version -- app/version.sh@14 -- # tr -d '"' 00:05:18.028 12:29:00 version -- app/version.sh@18 -- # minor=1 00:05:18.028 12:29:00 version -- app/version.sh@19 -- # get_header_version patch 00:05:18.028 12:29:00 version -- app/version.sh@14 -- # cut -f2 00:05:18.028 12:29:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:18.028 12:29:00 version -- app/version.sh@14 -- # tr -d '"' 00:05:18.028 
12:29:00 version -- app/version.sh@19 -- # patch=0 00:05:18.028 12:29:00 version -- app/version.sh@20 -- # get_header_version suffix 00:05:18.028 12:29:00 version -- app/version.sh@14 -- # tr -d '"' 00:05:18.028 12:29:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:18.028 12:29:00 version -- app/version.sh@14 -- # cut -f2 00:05:18.028 12:29:00 version -- app/version.sh@20 -- # suffix=-pre 00:05:18.028 12:29:00 version -- app/version.sh@22 -- # version=25.1 00:05:18.028 12:29:00 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:18.028 12:29:00 version -- app/version.sh@28 -- # version=25.1rc0 00:05:18.028 12:29:00 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:18.028 12:29:00 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:18.028 12:29:00 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:18.028 12:29:00 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:18.028 00:05:18.028 real 0m0.237s 00:05:18.028 user 0m0.139s 00:05:18.028 sys 0m0.138s 00:05:18.028 12:29:00 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.028 12:29:00 version -- common/autotest_common.sh@10 -- # set +x 00:05:18.028 ************************************ 00:05:18.028 END TEST version 00:05:18.028 ************************************ 00:05:18.028 12:29:00 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:18.028 12:29:00 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:18.028 12:29:00 -- spdk/autotest.sh@194 -- # uname -s 00:05:18.028 12:29:00 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:18.028 12:29:00 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:18.028 12:29:00 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:18.028 12:29:00 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:18.028 12:29:00 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:18.028 12:29:00 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:18.028 12:29:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:18.028 12:29:00 -- common/autotest_common.sh@10 -- # set +x 00:05:18.028 12:29:00 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:18.028 12:29:00 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:18.028 12:29:00 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:18.028 12:29:00 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:18.028 12:29:00 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:18.028 12:29:00 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:18.028 12:29:00 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:18.028 12:29:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:18.028 12:29:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.028 12:29:00 -- common/autotest_common.sh@10 -- # set +x 00:05:18.028 ************************************ 00:05:18.028 START TEST nvmf_tcp 00:05:18.028 ************************************ 00:05:18.028 12:29:00 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:18.288 * Looking for test storage... 
00:05:18.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:18.288 12:29:00 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.288 12:29:00 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.288 12:29:00 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.288 12:29:00 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.288 12:29:00 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.288 12:29:00 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.288 12:29:00 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.288 12:29:00 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.288 12:29:00 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.288 12:29:00 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.288 12:29:00 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.288 12:29:00 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.288 12:29:00 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.288 12:29:00 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.288 12:29:00 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.288 12:29:00 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:18.288 12:29:00 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:18.288 12:29:00 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.288 12:29:00 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.288 12:29:00 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:18.288 12:29:00 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:18.288 12:29:00 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.289 12:29:00 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:18.289 12:29:00 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.289 12:29:00 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:18.289 12:29:00 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:18.289 12:29:00 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.289 12:29:00 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:18.289 12:29:00 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.289 12:29:00 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.289 12:29:00 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.289 12:29:00 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:18.289 12:29:00 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.289 12:29:00 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.289 --rc genhtml_branch_coverage=1 00:05:18.289 --rc genhtml_function_coverage=1 00:05:18.289 --rc genhtml_legend=1 00:05:18.289 --rc geninfo_all_blocks=1 00:05:18.289 --rc geninfo_unexecuted_blocks=1 00:05:18.289 00:05:18.289 ' 00:05:18.289 12:29:00 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.289 --rc genhtml_branch_coverage=1 00:05:18.289 --rc genhtml_function_coverage=1 00:05:18.289 --rc genhtml_legend=1 00:05:18.289 --rc geninfo_all_blocks=1 00:05:18.289 --rc geninfo_unexecuted_blocks=1 00:05:18.289 00:05:18.289 ' 00:05:18.289 12:29:00 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:18.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.289 --rc genhtml_branch_coverage=1 00:05:18.289 --rc genhtml_function_coverage=1 00:05:18.289 --rc genhtml_legend=1 00:05:18.289 --rc geninfo_all_blocks=1 00:05:18.289 --rc geninfo_unexecuted_blocks=1 00:05:18.289 00:05:18.289 ' 00:05:18.289 12:29:00 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.289 --rc genhtml_branch_coverage=1 00:05:18.289 --rc genhtml_function_coverage=1 00:05:18.289 --rc genhtml_legend=1 00:05:18.289 --rc geninfo_all_blocks=1 00:05:18.289 --rc geninfo_unexecuted_blocks=1 00:05:18.289 00:05:18.289 ' 00:05:18.289 12:29:00 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:18.289 12:29:00 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:18.289 12:29:00 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:18.289 12:29:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:18.289 12:29:00 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.289 12:29:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.289 ************************************ 00:05:18.289 START TEST nvmf_target_core 00:05:18.289 ************************************ 00:05:18.289 12:29:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:18.289 * Looking for test storage... 
00:05:18.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:18.289 12:29:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.289 12:29:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.289 12:29:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.550 --rc genhtml_branch_coverage=1 00:05:18.550 --rc genhtml_function_coverage=1 00:05:18.550 --rc genhtml_legend=1 00:05:18.550 --rc geninfo_all_blocks=1 00:05:18.550 --rc geninfo_unexecuted_blocks=1 00:05:18.550 00:05:18.550 ' 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.550 --rc genhtml_branch_coverage=1 
00:05:18.550 --rc genhtml_function_coverage=1 00:05:18.550 --rc genhtml_legend=1 00:05:18.550 --rc geninfo_all_blocks=1 00:05:18.550 --rc geninfo_unexecuted_blocks=1 00:05:18.550 00:05:18.550 ' 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.550 --rc genhtml_branch_coverage=1 00:05:18.550 --rc genhtml_function_coverage=1 00:05:18.550 --rc genhtml_legend=1 00:05:18.550 --rc geninfo_all_blocks=1 00:05:18.550 --rc geninfo_unexecuted_blocks=1 00:05:18.550 00:05:18.550 ' 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.550 --rc genhtml_branch_coverage=1 00:05:18.550 --rc genhtml_function_coverage=1 00:05:18.550 --rc genhtml_legend=1 00:05:18.550 --rc geninfo_all_blocks=1 00:05:18.550 --rc geninfo_unexecuted_blocks=1 00:05:18.550 00:05:18.550 ' 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.550 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:18.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:18.551 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:05:18.551 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:18.551 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:18.551 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:18.551 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:18.551 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:18.551 12:29:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:18.551 12:29:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:18.551 12:29:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.551 12:29:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:18.551 ************************************ 00:05:18.551 START TEST nvmf_abort 00:05:18.551 ************************************ 00:05:18.551 12:29:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:18.551 * Looking for test storage... 
00:05:18.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:18.551 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.551 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.551 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.812 
12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.812 --rc genhtml_branch_coverage=1 00:05:18.812 --rc genhtml_function_coverage=1 00:05:18.812 --rc genhtml_legend=1 00:05:18.812 --rc geninfo_all_blocks=1 00:05:18.812 --rc 
geninfo_unexecuted_blocks=1 00:05:18.812 00:05:18.812 ' 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.812 --rc genhtml_branch_coverage=1 00:05:18.812 --rc genhtml_function_coverage=1 00:05:18.812 --rc genhtml_legend=1 00:05:18.812 --rc geninfo_all_blocks=1 00:05:18.812 --rc geninfo_unexecuted_blocks=1 00:05:18.812 00:05:18.812 ' 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.812 --rc genhtml_branch_coverage=1 00:05:18.812 --rc genhtml_function_coverage=1 00:05:18.812 --rc genhtml_legend=1 00:05:18.812 --rc geninfo_all_blocks=1 00:05:18.812 --rc geninfo_unexecuted_blocks=1 00:05:18.812 00:05:18.812 ' 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.812 --rc genhtml_branch_coverage=1 00:05:18.812 --rc genhtml_function_coverage=1 00:05:18.812 --rc genhtml_legend=1 00:05:18.812 --rc geninfo_all_blocks=1 00:05:18.812 --rc geninfo_unexecuted_blocks=1 00:05:18.812 00:05:18.812 ' 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.812 12:29:01 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.812 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:18.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:18.813 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:18.813 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:18.813 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:18.813 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:18.813 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:18.813 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:18.813 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:18.813 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:18.813 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:18.813 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:18.813 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:18.813 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:18.813 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:18.813 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:18.813 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:18.813 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:18.813 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:18.813 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:24.086 12:29:06 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:24.086 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:24.086 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:24.086 12:29:06 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:24.086 Found net devices under 0000:86:00.0: cvl_0_0 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:05:24.086 Found net devices under 0000:86:00.1: cvl_0_1 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:24.086 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:24.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:24.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:05:24.533 00:05:24.533 --- 10.0.0.2 ping statistics --- 00:05:24.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:24.533 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:24.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:24.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:05:24.533 00:05:24.533 --- 10.0.0.1 ping statistics --- 00:05:24.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:24.533 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2347867 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2347867 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2347867 ']' 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:24.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.533 [2024-11-28 12:29:06.737464] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:05:24.533 [2024-11-28 12:29:06.737519] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:24.533 [2024-11-28 12:29:06.807340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:24.533 [2024-11-28 12:29:06.851998] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:24.533 [2024-11-28 12:29:06.852036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:24.533 [2024-11-28 12:29:06.852043] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:24.533 [2024-11-28 12:29:06.852050] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:24.533 [2024-11-28 12:29:06.852054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:24.533 [2024-11-28 12:29:06.853396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.533 [2024-11-28 12:29:06.853490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:24.533 [2024-11-28 12:29:06.853618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.533 [2024-11-28 12:29:06.996162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.533 12:29:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.533 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:24.533 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.533 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.826 Malloc0 00:05:24.826 12:29:07 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.826 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:24.826 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.826 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.826 Delay0 00:05:24.826 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.826 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:24.826 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.826 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.826 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.826 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:24.826 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.826 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.826 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.826 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:24.826 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.826 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.826 [2024-11-28 12:29:07.077236] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:24.826 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.826 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:24.826 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.826 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.826 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.826 12:29:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:24.826 [2024-11-28 12:29:07.237101] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:27.413 [2024-11-28 12:29:09.382978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945d70 is same with the state(6) to be set 00:05:27.413 Initializing NVMe Controllers 00:05:27.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:27.413 controller IO queue size 128 less than required 00:05:27.413 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:27.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:27.413 Initialization complete. Launching workers. 
00:05:27.413 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 35708 00:05:27.413 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 35769, failed to submit 62 00:05:27.413 success 35712, unsuccessful 57, failed 0 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:27.413 rmmod nvme_tcp 00:05:27.413 rmmod nvme_fabrics 00:05:27.413 rmmod nvme_keyring 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:27.413 12:29:09 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2347867 ']' 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2347867 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2347867 ']' 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2347867 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2347867 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2347867' 00:05:27.413 killing process with pid 2347867 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2347867 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2347867 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:27.413 12:29:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:29.319 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:29.319 00:05:29.319 real 0m10.849s 00:05:29.319 user 0m11.811s 00:05:29.319 sys 0m5.190s 00:05:29.319 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.319 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:29.319 ************************************ 00:05:29.319 END TEST nvmf_abort 00:05:29.319 ************************************ 00:05:29.319 12:29:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:29.319 12:29:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:29.319 12:29:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.319 12:29:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:29.578 ************************************ 00:05:29.578 START TEST nvmf_ns_hotplug_stress 00:05:29.578 ************************************ 00:05:29.578 12:29:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:29.578 * Looking for test storage... 00:05:29.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:29.578 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:29.578 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:29.578 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.579 
12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.579 12:29:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:29.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.579 --rc genhtml_branch_coverage=1 00:05:29.579 --rc genhtml_function_coverage=1 00:05:29.579 --rc genhtml_legend=1 00:05:29.579 --rc geninfo_all_blocks=1 00:05:29.579 --rc geninfo_unexecuted_blocks=1 00:05:29.579 00:05:29.579 ' 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:29.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.579 --rc genhtml_branch_coverage=1 00:05:29.579 --rc genhtml_function_coverage=1 00:05:29.579 --rc genhtml_legend=1 00:05:29.579 --rc geninfo_all_blocks=1 00:05:29.579 --rc geninfo_unexecuted_blocks=1 00:05:29.579 00:05:29.579 ' 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:29.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.579 --rc genhtml_branch_coverage=1 00:05:29.579 --rc genhtml_function_coverage=1 00:05:29.579 --rc genhtml_legend=1 00:05:29.579 --rc geninfo_all_blocks=1 00:05:29.579 --rc geninfo_unexecuted_blocks=1 00:05:29.579 00:05:29.579 ' 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:29.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.579 --rc genhtml_branch_coverage=1 00:05:29.579 --rc genhtml_function_coverage=1 00:05:29.579 --rc genhtml_legend=1 00:05:29.579 --rc geninfo_all_blocks=1 00:05:29.579 --rc geninfo_unexecuted_blocks=1 00:05:29.579 
00:05:29.579 ' 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:29.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:29.579 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:29.580 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:29.580 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:29.580 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:29.580 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:29.580 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:29.580 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:29.580 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:29.580 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:29.580 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:29.580 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:29.580 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:34.850 12:29:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:34.850 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:34.850 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:34.851 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:34.851 12:29:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:34.851 Found net devices under 0000:86:00.0: cvl_0_0 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:34.851 12:29:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:34.851 Found net devices under 0000:86:00.1: cvl_0_1 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:34.851 12:29:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:34.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:34.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:05:34.851 00:05:34.851 --- 10.0.0.2 ping statistics --- 00:05:34.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:34.851 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:34.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:34.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:05:34.851 00:05:34.851 --- 10.0.0.1 ping statistics --- 00:05:34.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:34.851 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2351887 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2351887 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2351887 ']' 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.851 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:34.851 [2024-11-28 12:29:17.348241] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:05:34.851 [2024-11-28 12:29:17.348291] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:35.110 [2024-11-28 12:29:17.416321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.110 [2024-11-28 12:29:17.456929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:35.110 [2024-11-28 12:29:17.456970] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:35.110 [2024-11-28 12:29:17.456977] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:35.110 [2024-11-28 12:29:17.456983] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:35.110 [2024-11-28 12:29:17.456988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:35.110 [2024-11-28 12:29:17.458393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.110 [2024-11-28 12:29:17.458460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.110 [2024-11-28 12:29:17.458461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.110 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.110 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:35.110 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:35.110 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:35.110 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:35.110 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:35.110 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:35.111 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:35.370 [2024-11-28 12:29:17.768310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:35.370 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:35.628 12:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:35.888 [2024-11-28 12:29:18.185800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:35.888 12:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:36.146 12:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:36.146 Malloc0 00:05:36.146 12:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:36.405 Delay0 00:05:36.405 12:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.663 12:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:36.922 NULL1 00:05:36.922 12:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:37.181 12:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2352328 00:05:37.181 12:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:37.181 12:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:05:37.181 12:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.118 Read completed with error (sct=0, sc=11) 00:05:38.376 12:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.634 12:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:38.634 12:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:38.634 true 00:05:38.634 12:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:05:38.634 12:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.568 12:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.826 12:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:39.826 12:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:39.826 true 00:05:39.826 12:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:05:39.826 12:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.085 12:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.342 12:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:40.342 12:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:40.600 true 00:05:40.600 12:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:05:40.600 12:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.534 12:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.792 12:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:41.792 12:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:41.792 true 00:05:42.051 12:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:05:42.051 12:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.051 12:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.309 12:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:42.309 12:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:42.567 true 00:05:42.568 12:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:05:42.568 12:29:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.761 12:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.761 12:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:43.761 12:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:44.020 true 00:05:44.020 12:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:05:44.020 12:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.957 12:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.957 12:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:44.957 12:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:45.216 true 00:05:45.216 12:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:05:45.216 12:29:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.474 12:29:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.732 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:45.732 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:45.991 true 00:05:45.991 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:05:45.991 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.928 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.186 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:47.186 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:47.186 true 00:05:47.186 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 2352328 00:05:47.186 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.445 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.704 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:47.704 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:47.963 true 00:05:47.963 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:05:47.963 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.898 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.898 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.157 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:49.157 12:29:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:49.417 true 00:05:49.417 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:05:49.417 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.355 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.355 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.355 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:50.355 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:50.614 true 00:05:50.614 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:05:50.614 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.873 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.131 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:51.131 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:51.131 true 00:05:51.131 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:05:51.131 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.509 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.509 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:52.509 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:52.768 true 00:05:52.768 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:05:52.768 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.704 12:29:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.704 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.704 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:53.704 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:53.962 true 00:05:53.962 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:05:53.962 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.962 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.221 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:54.221 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:54.479 true 00:05:54.479 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:05:54.479 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.856 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:05:55.857 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.857 12:29:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:55.857 12:29:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:55.857 true 00:05:56.115 12:29:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:05:56.115 12:29:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.682 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.941 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:56.941 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:57.200 true 00:05:57.200 12:29:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:05:57.200 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.459 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.719 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:57.719 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:57.719 true 00:05:57.719 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:05:57.719 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.096 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:05:59.096 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:59.096 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:59.355 true 00:05:59.355 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:05:59.355 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.291 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.291 12:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.292 12:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:00.292 12:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:00.550 true 00:06:00.550 12:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:06:00.550 12:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.809 12:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.809 12:29:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:00.809 12:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:01.068 true 00:06:01.068 12:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:06:01.068 12:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.446 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.446 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:02.446 12:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:02.705 true 00:06:02.705 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:06:02.705 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.640 12:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.640 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:03.640 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:03.899 true 00:06:03.899 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:06:03.899 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.158 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.158 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:04.158 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:04.416 true 00:06:04.416 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:06:04.416 12:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.794 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.794 12:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.794 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:05.794 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:06.053 true 00:06:06.053 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328 00:06:06.053 12:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.989 12:29:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.989 12:29:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:06.989 12:29:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:07.248 true 
00:06:07.248 12:29:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328
00:06:07.248 12:29:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:07.248 Initializing NVMe Controllers
00:06:07.248 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:07.248 Controller IO queue size 128, less than required.
00:06:07.248 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:07.248 Controller IO queue size 128, less than required.
00:06:07.248 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:07.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:07.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:07.248 Initialization complete. Launching workers.
00:06:07.248 ========================================================
00:06:07.248 Latency(us)
00:06:07.248 Device Information : IOPS MiB/s Average min max
00:06:07.248 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1867.92 0.91 47103.66 2110.99 1019671.14
00:06:07.248 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17225.31 8.41 7430.49 2671.14 382626.46
00:06:07.248 ========================================================
00:06:07.248 Total : 19093.23 9.32 11311.79 2110.99 1019671.14
00:06:07.248
00:06:07.507 12:29:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:07.507 12:29:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:06:07.507 12:29:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:06:07.771 true
00:06:07.771 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2352328
00:06:07.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2352328) - No such process
00:06:07.771 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2352328
00:06:07.771 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:08.029 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:08.288
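The trace records above (ns_hotplug_stress.sh lines 44-50) all come from one control loop: while the background I/O process is still alive (`kill -0`), the test removes namespace 1, re-adds the Delay0 bdev as a namespace, and grows the NULL1 null bdev by one block, until `kill -0` reports "No such process" and the loop exits. A minimal standalone sketch of that loop is below; the real test drives SPDK over `scripts/rpc.py`, so here `rpc` is a hypothetical no-op stub and a short `sleep` stands in for the I/O workload:

```shell
# Sketch of the hotplug-stress control loop from ns_hotplug_stress.sh.
# `rpc` is a hypothetical stub standing in for spdk/scripts/rpc.py.
rpc() { :; }

sleep 0.5 &           # stand-in for the background I/O process
io_pid=$!
null_size=1018
while kill -0 "$io_pid" 2>/dev/null; do                          # line 44
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # line 45
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # line 46
    null_size=$((null_size + 1))                                 # line 49
    rpc bdev_null_resize NULL1 "$null_size"                      # line 50
done
wait "$io_pid"
echo "final null_size=$null_size"
```

Once the I/O process exits, `kill -0` fails, the loop stops, and the script moves on to the cleanup `wait`/`remove_ns` calls seen in the log.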
12:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:08.288 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:08.288 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:08.288 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.288 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:08.288 null0 00:06:08.288 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:08.288 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.288 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:08.546 null1 00:06:08.546 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:08.546 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.546 12:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:08.805 null2 00:06:08.805 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:08.805 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.805 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:09.063 null3 00:06:09.063 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:09.063 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:09.064 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:09.322 null4 00:06:09.322 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:09.322 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:09.322 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:09.322 null5 00:06:09.322 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:09.322 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:09.322 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:09.580 null6 00:06:09.580 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:09.580 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:09.580 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:09.839 null7 00:06:09.839 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:09.839 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:09.839 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:09.839 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.839 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:09.839 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.839 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:09.839 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.839 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:09.839 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.839 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.839 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:09.839 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:09.839 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.839 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:09.839 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.839 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:09.839 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.839 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.839 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:09.839 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:09.839 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.839 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.840 
12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2357770 2357771 2357774 2357776 2357779 2357782 2357784 2357786 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.840 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:10.099 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.099 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:10.099 
12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:10.099 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:10.100 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:10.100 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:10.100 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:10.100 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:10.100 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.100 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.100 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:10.100 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.100 12:29:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.100 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:10.358 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:10.616 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.616 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.616 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:10.616 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.616 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.616 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:10.616 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.616 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.616 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:10.616 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.616 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.616 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:10.616 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.616 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.616 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:10.616 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.616 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.617 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:10.617 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.617 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.617 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:10.617 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:10.617 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:10.617 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:10.875 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:10.875 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:10.875 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:10.875 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:10.875 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:10.875 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:10.875 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:10.875 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:11.133 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.133 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.133 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:11.133 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.133 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.133 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:11.133 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.133 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.133 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:11.133 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.133 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.133 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.133 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.133 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.134 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:11.134 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.134 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:11.134 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:11.134 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.134 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.134 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:11.134 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.134 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.134 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:11.412 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:11.412 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.413 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:11.671 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:11.672 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:11.672 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:11.672 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:11.672 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:11.672 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:11.672 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:11.672 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:11.931 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:12.189 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:12.189 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:12.189 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:12.189 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:12.189 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:12.189 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:12.189 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:12.189 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:12.447 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.447 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.447 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:12.447 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.447 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.447 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:12.447 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.447 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.447 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:12.447 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.447 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.447 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:12.447 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.447 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.447 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:12.447 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.447 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.447 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.447 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:12.448 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.448 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:12.448 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.448 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.448 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:12.448 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:12.448 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:12.448 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:12.448 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:12.448 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:12.448 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:12.448 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:12.707 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:12.707 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.707 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.707 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:12.707 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.707 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.707 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:12.707 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.707 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.707 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:12.708 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.708 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.708 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:12.708 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.708 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.708 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:12.708 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.708 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.708 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:12.708 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.708 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.708 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:12.708 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:12.708 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:12.708 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:12.966 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:12.966 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:12.966 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:12.966 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:12.966 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:12.966 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:12.966 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:12.966 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.225 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:13.484 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:13.484 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:13.484 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:13.484 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:13.484 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:13.484 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:13.484 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:13.484 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.743 12:29:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:13.743 12:29:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:13.743 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:14.002 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.002 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.002 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.002 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.002 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.002 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.002 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.003 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.003 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.003 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.003 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:14.003 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.003 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.003 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.003 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.003 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.003 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:14.003 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:14.003 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:14.003 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:14.003 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:14.003 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:14.003 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:14.003 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:14.003 rmmod nvme_tcp 00:06:14.003 rmmod nvme_fabrics 00:06:14.003 rmmod nvme_keyring 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:14.262 12:29:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2351887 ']' 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2351887 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2351887 ']' 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2351887 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2351887 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2351887' 00:06:14.262 killing process with pid 2351887 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2351887 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2351887 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 
-- # iptr 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:14.262 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:14.520 12:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:16.425 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:16.425 00:06:16.425 real 0m46.987s 00:06:16.425 user 3m13.396s 00:06:16.425 sys 0m15.074s 00:06:16.425 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.425 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:16.425 ************************************ 00:06:16.425 END TEST nvmf_ns_hotplug_stress 00:06:16.425 ************************************ 00:06:16.425 12:29:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:16.425 12:29:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:16.425 12:29:58 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.425 12:29:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:16.425 ************************************ 00:06:16.425 START TEST nvmf_delete_subsystem 00:06:16.425 ************************************ 00:06:16.425 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:16.685 * Looking for test storage... 00:06:16.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:16.685 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.685 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.685 12:29:59 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@366 -- # ver2[v]=2 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.685 --rc genhtml_branch_coverage=1 00:06:16.685 --rc genhtml_function_coverage=1 00:06:16.685 --rc genhtml_legend=1 00:06:16.685 --rc geninfo_all_blocks=1 00:06:16.685 --rc geninfo_unexecuted_blocks=1 00:06:16.685 00:06:16.685 ' 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.685 --rc genhtml_branch_coverage=1 00:06:16.685 --rc genhtml_function_coverage=1 00:06:16.685 --rc genhtml_legend=1 00:06:16.685 --rc geninfo_all_blocks=1 00:06:16.685 --rc geninfo_unexecuted_blocks=1 00:06:16.685 00:06:16.685 ' 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:16.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.685 --rc genhtml_branch_coverage=1 00:06:16.685 --rc genhtml_function_coverage=1 00:06:16.685 --rc genhtml_legend=1 00:06:16.685 --rc geninfo_all_blocks=1 00:06:16.685 --rc geninfo_unexecuted_blocks=1 00:06:16.685 00:06:16.685 ' 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:16.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.685 --rc genhtml_branch_coverage=1 00:06:16.685 --rc genhtml_function_coverage=1 00:06:16.685 --rc genhtml_legend=1 00:06:16.685 --rc geninfo_all_blocks=1 00:06:16.685 --rc geninfo_unexecuted_blocks=1 00:06:16.685 00:06:16.685 ' 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:16.685 12:29:59 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.685 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.686 12:29:59 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:16.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:16.686 12:29:59 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:16.686 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:23.255 12:30:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:23.255 12:30:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:23.255 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:23.255 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.255 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:23.256 Found net devices under 0000:86:00.0: cvl_0_0 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
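The `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` trace above maps each matched PCI function to its kernel netdev names by globbing sysfs, then strips the directory prefix. A minimal sketch of that lookup, with the sysfs root as a parameter so it can be exercised against a mock tree (the BDF and `cvl_0_0` name below are taken from this log; the mock layout itself is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the sysfs lookup done by gather_supported_nvmf_pci_devs:
# list the netdev names registered under a PCI function's net/ directory.
list_pci_net_devs() {
    local sysfs_root=$1 pci=$2
    local d names=()
    for d in "$sysfs_root/$pci/net/"*; do
        [[ -e $d ]] || continue          # glob matched nothing
        names+=("${d##*/}")              # keep only the name, e.g. cvl_0_0
    done
    echo "${names[@]}"
}

# Exercise against a mock sysfs layout instead of the real /sys/bus/pci.
root=$(mktemp -d)
mkdir -p "$root/0000:86:00.0/net/cvl_0_0"
list_pci_net_devs "$root" 0000:86:00.0
```

The real script additionally filters on driver binding and `operstate == up` before appending to `net_devs`.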
00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:23.256 Found net devices under 0000:86:00.1: cvl_0_1 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m 
comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:23.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:23.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:06:23.256 00:06:23.256 --- 10.0.0.2 ping statistics --- 00:06:23.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.256 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:23.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:23.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:06:23.256 00:06:23.256 --- 10.0.0.1 ping statistics --- 00:06:23.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.256 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:23.256 12:30:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:23.256 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:23.257 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:23.257 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2362503 00:06:23.257 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2362503 00:06:23.257 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:23.257 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2362503 ']' 00:06:23.257 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.257 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.257 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
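The `nvmf_tcp_init` steps above split the two ports of one NIC into a target/initiator pair: `cvl_0_0` moves into the `cvl_0_0_ns_spdk` namespace as 10.0.0.2/24 (target), `cvl_0_1` stays in the default namespace as 10.0.0.1/24 (initiator), an iptables rule admits port 4420, and a ping in each direction verifies the path. A dry-run sketch of that sequence; `run` only echoes, since the real commands need root and these exact interfaces:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns-based TCP test topology set up above.
run() { echo "+ $*"; }

setup_tcp_netns() {
    local tgt=$1 ini=$2 ns=${1}_ns_spdk
    run ip netns add "$ns"
    run ip link set "$tgt" netns "$ns"
    run ip addr add 10.0.0.1/24 dev "$ini"
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"
    run ip link set "$ini" up
    run ip netns exec "$ns" ip link set "$tgt" up
    run iptables -I INPUT 1 -i "$ini" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                       # initiator -> target
    run ip netns exec "$ns" ping -c 1 10.0.0.1   # target -> initiator
}

setup_tcp_netns cvl_0_0 cvl_0_1
```

Running the target inside a namespace is what lets a single host exercise a real TCP path: the kernel cannot short-circuit target and initiator onto loopback.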
00:06:23.257 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.257 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:23.257 [2024-11-28 12:30:04.950045] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:06:23.257 [2024-11-28 12:30:04.950088] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:23.257 [2024-11-28 12:30:05.017461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.257 [2024-11-28 12:30:05.057756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:23.257 [2024-11-28 12:30:05.057790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:23.257 [2024-11-28 12:30:05.057798] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:23.257 [2024-11-28 12:30:05.057804] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:23.257 [2024-11-28 12:30:05.057810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
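`waitforlisten 2362503` above blocks until the freshly started `nvmf_tgt` is listening on `/var/tmp/spdk.sock`. A minimal sketch of that polling idea; the real helper also checks that the pid is still alive and that the socket answers RPCs, whereas this version only watches the path (and uses `-e` so it can be demoed with a plain file instead of a UNIX socket):

```shell
#!/usr/bin/env bash
# Sketch of waitforlisten: poll for the app's RPC socket path with a retry cap.
waitforlisten() {
    local rpc_addr=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [[ -e $rpc_addr ]] && return 0
        sleep 0.05
    done
    echo "timed out waiting for $rpc_addr" >&2
    return 1
}

sock=$(mktemp -u)                          # path that does not exist yet
waitforlisten "$sock" 3 || echo "not listening"
touch "$sock"                              # stand-in for the app creating it
waitforlisten "$sock" 3 && echo "listening"
```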
00:06:23.257 [2024-11-28 12:30:05.058942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.257 [2024-11-28 12:30:05.058945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:23.257 [2024-11-28 12:30:05.192995] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:23.257 [2024-11-28 12:30:05.209191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:23.257 NULL1 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:23.257 Delay0 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.257 12:30:05 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2362531 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:23.257 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:23.257 [2024-11-28 12:30:05.293887] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
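The RPC sequence traced above builds the subsystem that the test will delete mid-I/O: a TCP transport, `cnode1` listening on 10.0.0.2:4420, and a namespace backed by a null bdev wrapped in `bdev_delay` with large artificial latencies, so requests from `spdk_nvme_perf` are still in flight when `nvmf_delete_subsystem` lands. A dry-run sketch of that sequence; `rpc_cmd` only echoes here, while the real one drives `scripts/rpc.py` against the target in the namespace:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the delete_subsystem.sh RPC sequence traced above.
rpc_cmd() { echo "rpc.py $*"; }

nqn=nqn.2016-06.io.spdk:cnode1
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512
rpc_cmd bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc_cmd nvmf_subsystem_add_ns "$nqn" Delay0
# spdk_nvme_perf runs in the background; the delete below arrives while
# delayed I/O is still outstanding, which is the behavior under test.
rpc_cmd nvmf_delete_subsystem "$nqn"
```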
00:06:25.160 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:25.161 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.161 12:30:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:25.161 [repeated lines condensed: many 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions interleaved with 'starting I/O failed: -6', emitted as outstanding I/O is aborted by the subsystem delete]
00:06:25.161 [2024-11-28 12:30:07.415787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe484000c40 is same with the state(6) to be set
00:06:25.162 [2024-11-28 12:30:07.416529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe48400d020 is same with the state(6) to be set
00:06:26.098 [2024-11-28 12:30:08.390557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf359b0 is same with the state(6) to be set
00:06:26.098 [repeated completion-error lines condensed]
00:06:26.098 [2024-11-28 12:30:08.416372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe48400d350 is same with the state(6) to be set
00:06:26.099 [repeated completion-error lines condensed; log truncated mid-line]
sc=8) 00:06:26.099 Write completed with error (sct=0, sc=8) 00:06:26.099 Write completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 [2024-11-28 12:30:08.418576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf34860 is same with the state(6) to be set 00:06:26.099 Write completed with error (sct=0, sc=8) 00:06:26.099 Write completed with error (sct=0, sc=8) 00:06:26.099 Write completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Write completed with error (sct=0, sc=8) 00:06:26.099 Write completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Write completed with error (sct=0, sc=8) 00:06:26.099 Write completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Write completed with error (sct=0, sc=8) 00:06:26.099 Write completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Write completed with error (sct=0, sc=8) 
00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Write completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Write completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 [2024-11-28 12:30:08.418868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf342c0 is same with the state(6) to be set 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Write completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Write completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Write completed with error (sct=0, sc=8) 00:06:26.099 Write completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Write 
completed with error (sct=0, sc=8) 00:06:26.099 Write completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Write completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Read completed with error (sct=0, sc=8) 00:06:26.099 Write completed with error (sct=0, sc=8) 00:06:26.099 [2024-11-28 12:30:08.419487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf344a0 is same with the state(6) to be set 00:06:26.099 Initializing NVMe Controllers 00:06:26.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:26.099 Controller IO queue size 128, less than required. 00:06:26.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:26.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:26.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:26.099 Initialization complete. Launching workers. 
00:06:26.099 ======================================================== 00:06:26.099 Latency(us) 00:06:26.099 Device Information : IOPS MiB/s Average min max 00:06:26.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 192.00 0.09 953198.13 616.51 1012800.11 00:06:26.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 149.83 0.07 900075.56 440.05 1011916.63 00:06:26.099 ======================================================== 00:06:26.099 Total : 341.83 0.17 929913.64 440.05 1012800.11 00:06:26.099 00:06:26.099 [2024-11-28 12:30:08.420104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf359b0 (9): Bad file descriptor 00:06:26.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:26.099 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.099 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:26.099 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2362531 00:06:26.099 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2362531 00:06:26.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2362531) - No such process 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2362531 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:26.667 12:30:08 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2362531 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2362531 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:26.667 
12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.667 [2024-11-28 12:30:08.948683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2363454 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2363454 00:06:26.667 12:30:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:26.667 [2024-11-28 12:30:09.018607] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:27.234 12:30:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:27.234 12:30:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2363454 00:06:27.234 12:30:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:27.493 12:30:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:27.493 12:30:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2363454 00:06:27.493 12:30:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:28.060 12:30:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:28.060 12:30:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2363454 00:06:28.060 12:30:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:28.629 12:30:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:28.629 12:30:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2363454 00:06:28.629 12:30:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:29.196 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:29.196 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2363454 00:06:29.196 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:29.762 12:30:11 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:29.762 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2363454 00:06:29.762 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:29.762 Initializing NVMe Controllers 00:06:29.762 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:29.762 Controller IO queue size 128, less than required. 00:06:29.762 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:29.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:29.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:29.762 Initialization complete. Launching workers. 00:06:29.762 ======================================================== 00:06:29.762 Latency(us) 00:06:29.762 Device Information : IOPS MiB/s Average min max 00:06:29.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003250.20 1000135.34 1010865.74 00:06:29.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004940.69 1000162.15 1012180.94 00:06:29.762 ======================================================== 00:06:29.762 Total : 256.00 0.12 1004095.44 1000135.34 1012180.94 00:06:29.762 00:06:30.021 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:30.021 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2363454 00:06:30.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2363454) - No such process 00:06:30.021 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # 
wait 2363454 00:06:30.021 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:30.021 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:30.021 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:30.021 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:30.021 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:30.021 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:30.021 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:30.021 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:30.021 rmmod nvme_tcp 00:06:30.021 rmmod nvme_fabrics 00:06:30.279 rmmod nvme_keyring 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2362503 ']' 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2362503 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2362503 ']' 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2362503 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:30.279 12:30:12 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2362503 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2362503' 00:06:30.279 killing process with pid 2362503 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2362503 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2362503 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:30.279 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:32.815 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:32.815 00:06:32.815 real 0m15.950s 00:06:32.815 user 0m29.004s 00:06:32.815 sys 0m5.398s 00:06:32.815 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.815 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.815 ************************************ 00:06:32.815 END TEST nvmf_delete_subsystem 00:06:32.815 ************************************ 00:06:32.815 12:30:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:32.815 12:30:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:32.815 12:30:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.815 12:30:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:32.815 ************************************ 00:06:32.815 START TEST nvmf_host_management 00:06:32.815 ************************************ 00:06:32.815 12:30:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:32.815 * Looking for test storage... 
00:06:32.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:32.815 12:30:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.815 12:30:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:32.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.815 --rc genhtml_branch_coverage=1 00:06:32.815 --rc genhtml_function_coverage=1 00:06:32.815 --rc genhtml_legend=1 00:06:32.815 --rc geninfo_all_blocks=1 00:06:32.815 --rc geninfo_unexecuted_blocks=1 00:06:32.815 00:06:32.815 ' 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:32.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.815 --rc genhtml_branch_coverage=1 00:06:32.815 --rc genhtml_function_coverage=1 00:06:32.815 --rc genhtml_legend=1 00:06:32.815 --rc geninfo_all_blocks=1 00:06:32.815 --rc geninfo_unexecuted_blocks=1 00:06:32.815 00:06:32.815 ' 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:32.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.815 --rc genhtml_branch_coverage=1 00:06:32.815 --rc genhtml_function_coverage=1 00:06:32.815 --rc genhtml_legend=1 00:06:32.815 --rc geninfo_all_blocks=1 00:06:32.815 --rc geninfo_unexecuted_blocks=1 00:06:32.815 00:06:32.815 ' 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:32.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.815 --rc genhtml_branch_coverage=1 00:06:32.815 --rc genhtml_function_coverage=1 00:06:32.815 --rc genhtml_legend=1 00:06:32.815 --rc geninfo_all_blocks=1 00:06:32.815 --rc geninfo_unexecuted_blocks=1 00:06:32.815 00:06:32.815 ' 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:32.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
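The trace above records a real shell error from `nvmf/common.sh` line 33: `[: : integer expression expected`. The script passes an empty string to `[ ... -eq 1 ]`, which is not a valid integer operand; the test fails (harmlessly here) and the script continues. A minimal sketch of that failure mode and a defensive fix, assuming a hypothetical flag variable (the real script tests one of its own configuration flags):

```shell
#!/usr/bin/env bash
flag=""                       # empty, like the unset flag in the log

# Naive integer test: with an empty operand this raises
# "[: : integer expression expected" (suppressed here) and
# falls through to the else branch, exactly as in the log.
if [ "$flag" -eq 1 ] 2>/dev/null; then
    result="flag-set"
else
    result="naive-false"
fi

# Defensive form: default the expansion so the operand is
# always a valid integer and no error is emitted.
if [ "${flag:-0}" -eq 1 ]; then
    result="flag-set"
else
    result="default"
fi
echo "$result"
```

The `${flag:-0}` default is the usual idiom for making `-eq` safe against unset or empty variables.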
MALLOC_BDEV_SIZE=64 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:32.815 12:30:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.091 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:38.091 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:38.091 12:30:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:38.091 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:38.091 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:38.091 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:38.091 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:38.091 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:38.091 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:38.091 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:38.091 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:38.091 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:38.091 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:38.091 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:38.091 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:38.091 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:38.091 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:38.091 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:38.091 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:38.091 12:30:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:38.091 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:38.091 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:38.092 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:38.092 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:38.092 12:30:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:38.092 Found net devices under 0000:86:00.0: cvl_0_0 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:38.092 Found net devices under 0000:86:00.1: cvl_0_1 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:38.092 12:30:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:38.092 12:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:38.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:38.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:06:38.092 00:06:38.092 --- 10.0.0.2 ping statistics --- 00:06:38.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.092 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:38.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:38.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:06:38.092 00:06:38.092 --- 10.0.0.1 ping statistics --- 00:06:38.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.092 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
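The `nvmf_tcp_init` sequence traced above isolates the two ports of one physical NIC: the target-side port (`cvl_0_0`) is moved into a network namespace and addressed as 10.0.0.2, while the initiator-side port (`cvl_0_1`) stays on the host as 10.0.0.1, and a firewall rule admits TCP 4420 before a `ping` in each direction verifies the loop. A dry-run sketch of that topology, using the interface names and addresses from the log (commands are only printed here, since the real ones need root):

```shell
#!/usr/bin/env bash
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0            # target-side port, moved into the namespace
INI_IF=cvl_0_1            # initiator-side port, stays on the host

cmds=""
run() { cmds="$cmds$*"$'\n'; echo "+ $*"; }   # dry-run: record and print

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# Bidirectional connectivity check, as in the log's two pings:
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Replacing `run` with direct execution (under root) reproduces the setup; the namespace is why the target is later launched via `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt`.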
00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2367612 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2367612 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2367612 ']' 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.092 [2024-11-28 12:30:20.227023] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:06:38.092 [2024-11-28 12:30:20.227065] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.092 [2024-11-28 12:30:20.294713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:38.092 [2024-11-28 12:30:20.336378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:38.092 [2024-11-28 12:30:20.336417] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:38.092 [2024-11-28 12:30:20.336425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:38.092 [2024-11-28 12:30:20.336431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:38.092 [2024-11-28 12:30:20.336436] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:38.092 [2024-11-28 12:30:20.338083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.092 [2024-11-28 12:30:20.338150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.092 [2024-11-28 12:30:20.338257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.092 [2024-11-28 12:30:20.338257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:38.092 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.093 [2024-11-28 12:30:20.484534] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:38.093 12:30:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.093 Malloc0 00:06:38.093 [2024-11-28 12:30:20.551715] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2367667 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2367667 /var/tmp/bdevperf.sock 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2367667 ']' 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:38.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:38.093 { 00:06:38.093 "params": { 00:06:38.093 "name": "Nvme$subsystem", 00:06:38.093 "trtype": "$TEST_TRANSPORT", 00:06:38.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:38.093 "adrfam": "ipv4", 00:06:38.093 "trsvcid": "$NVMF_PORT", 00:06:38.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:38.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:38.093 "hdgst": ${hdgst:-false}, 
00:06:38.093 "ddgst": ${ddgst:-false} 00:06:38.093 }, 00:06:38.093 "method": "bdev_nvme_attach_controller" 00:06:38.093 } 00:06:38.093 EOF 00:06:38.093 )") 00:06:38.093 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:38.352 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:38.352 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:38.352 12:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:38.352 "params": { 00:06:38.352 "name": "Nvme0", 00:06:38.352 "trtype": "tcp", 00:06:38.352 "traddr": "10.0.0.2", 00:06:38.352 "adrfam": "ipv4", 00:06:38.352 "trsvcid": "4420", 00:06:38.352 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:38.352 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:38.352 "hdgst": false, 00:06:38.352 "ddgst": false 00:06:38.352 }, 00:06:38.352 "method": "bdev_nvme_attach_controller" 00:06:38.352 }' 00:06:38.352 [2024-11-28 12:30:20.646746] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:06:38.352 [2024-11-28 12:30:20.646793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2367667 ] 00:06:38.352 [2024-11-28 12:30:20.711378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.352 [2024-11-28 12:30:20.752770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.611 Running I/O for 10 seconds... 
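The `gen_nvmf_target_json` trace above shows how the bdevperf configuration is built: one `bdev_nvme_attach_controller` entry is generated per subsystem number from a template, the entries are joined with `IFS=,`, and the result is piped through `jq` before being handed to bdevperf as `--json /dev/fd/63`. A simplified sketch of that generation step, using the concrete values substituted in the log (10.0.0.2:4420, `cnode$n`/`host$n`) and emitting only the config array rather than the full jq-wrapped document:

```shell
#!/usr/bin/env bash
# One attach-controller entry per subsystem argument (default: 0),
# mirroring the template expanded in the log above.
gen_target_json() {
    local subsystem entries=()
    for subsystem in "${@:-0}"; do
        entries+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' \
            "$subsystem" "$subsystem" "$subsystem")")
    done
    local IFS=,                      # join entries with commas, as common.sh does
    printf '[%s]\n' "${entries[*]}"
}

json=$(gen_target_json 0)
echo "$json"
```

Feeding this through a bdev-subsystem wrapper and `jq .` yields the pretty-printed payload visible in the trace.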
00:06:38.611 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.611 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:38.611 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:38.611 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.611 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.611 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.611 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:38.611 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:38.611 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:38.611 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:38.611 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:38.611 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:38.611 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:38.611 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:38.611 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:38.611 
12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:38.611 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.611 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.611 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.870 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:06:38.870 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:06:38.870 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:38.870 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:38.870 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:38.870 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:39.130 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:39.130 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.130 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.130 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.130 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:06:39.130 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # 
'[' 643 -ge 100 ']' 00:06:39.130 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:39.130 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:39.130 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:39.130 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:39.130 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.130 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.130 [2024-11-28 12:30:21.434749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434867] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434961] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.434994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.435000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.435006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.435012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.435019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.435025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.435032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.435039] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.435045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.435052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.435059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.435067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d90b0 is same with the state(6) to be set 00:06:39.130 [2024-11-28 12:30:21.436920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:39.130 [2024-11-28 12:30:21.436961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.436972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:39.131 [2024-11-28 12:30:21.436980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.436989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:39.131 [2024-11-28 12:30:21.436996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.437003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:06:39.131 [2024-11-28 12:30:21.437010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.437017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b09510 is same with the state(6) to be set 00:06:39.131 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.131 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:39.131 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.131 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.131 [2024-11-28 12:30:21.446244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446320] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446414] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 
[2024-11-28 12:30:21.446601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446686] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446774] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.131 [2024-11-28 12:30:21.446817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.131 [2024-11-28 12:30:21.446826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.446833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.446842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.446849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.446858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.446865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.446874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.446881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.446890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.446897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.446906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.446913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.446921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.446929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.446937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.446945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.446960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:06:39.132 [2024-11-28 12:30:21.446968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.446976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.446983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.446994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.447002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.447010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.447018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.447026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.447033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.447041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.447049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.447058] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.447065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.447073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.447080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.447088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.447095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.447104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.447112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.447120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.447127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.447136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.447143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.447151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.447158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.447167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.447174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.447184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.447200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.447209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.447216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.447225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.447233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.447242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.447249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.447257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.447264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 12:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.132 [2024-11-28 12:30:21.447274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.447282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.447290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.447297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.447305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.132 [2024-11-28 12:30:21.447312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.132 [2024-11-28 12:30:21.447394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b09510 (9): Bad file descriptor 00:06:39.132 12:30:21 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:39.132 [2024-11-28 12:30:21.448290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:39.132 task offset: 98048 on job bdev=Nvme0n1 fails 00:06:39.132 00:06:39.132 Latency(us) 00:06:39.132 [2024-11-28T11:30:21.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:39.132 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:39.132 Job: Nvme0n1 ended in about 0.41 seconds with error 00:06:39.132 Verification LBA range: start 0x0 length 0x400 00:06:39.132 Nvme0n1 : 0.41 1888.49 118.03 157.79 0.00 30428.61 1431.82 27582.11 00:06:39.132 [2024-11-28T11:30:21.651Z] =================================================================================================================== 00:06:39.132 [2024-11-28T11:30:21.651Z] Total : 1888.49 118.03 157.79 0.00 30428.61 1431.82 27582.11 00:06:39.132 [2024-11-28 12:30:21.450674] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:39.132 [2024-11-28 12:30:21.461385] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:06:40.069 12:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2367667 00:06:40.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2367667) - No such process 00:06:40.069 12:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:40.069 12:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:40.069 12:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:40.069 12:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:40.069 12:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:40.069 12:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:40.069 12:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:40.069 12:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:40.069 { 00:06:40.069 "params": { 00:06:40.069 "name": "Nvme$subsystem", 00:06:40.069 "trtype": "$TEST_TRANSPORT", 00:06:40.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:40.069 "adrfam": "ipv4", 00:06:40.069 "trsvcid": "$NVMF_PORT", 00:06:40.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:40.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:40.069 "hdgst": ${hdgst:-false}, 00:06:40.069 "ddgst": ${ddgst:-false} 00:06:40.069 }, 00:06:40.069 "method": "bdev_nvme_attach_controller" 00:06:40.069 } 00:06:40.069 EOF 00:06:40.069 )") 00:06:40.069 
12:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:40.069 12:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:40.069 12:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:40.069 12:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:40.069 "params": { 00:06:40.069 "name": "Nvme0", 00:06:40.069 "trtype": "tcp", 00:06:40.069 "traddr": "10.0.0.2", 00:06:40.069 "adrfam": "ipv4", 00:06:40.069 "trsvcid": "4420", 00:06:40.069 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:40.069 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:40.069 "hdgst": false, 00:06:40.069 "ddgst": false 00:06:40.069 }, 00:06:40.069 "method": "bdev_nvme_attach_controller" 00:06:40.069 }' 00:06:40.069 [2024-11-28 12:30:22.500848] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:06:40.069 [2024-11-28 12:30:22.500896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2367987 ] 00:06:40.069 [2024-11-28 12:30:22.563345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.328 [2024-11-28 12:30:22.605167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.587 Running I/O for 1 seconds... 
00:06:41.524 1920.00 IOPS, 120.00 MiB/s 00:06:41.524 Latency(us) 00:06:41.524 [2024-11-28T11:30:24.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:41.524 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:41.524 Verification LBA range: start 0x0 length 0x400 00:06:41.524 Nvme0n1 : 1.01 1964.33 122.77 0.00 0.00 32065.38 5157.40 27696.08 00:06:41.524 [2024-11-28T11:30:24.043Z] =================================================================================================================== 00:06:41.524 [2024-11-28T11:30:24.043Z] Total : 1964.33 122.77 0.00 0.00 32065.38 5157.40 27696.08 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:41.784 12:30:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:41.784 rmmod nvme_tcp 00:06:41.784 rmmod nvme_fabrics 00:06:41.784 rmmod nvme_keyring 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2367612 ']' 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2367612 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2367612 ']' 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2367612 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2367612 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2367612' 00:06:41.784 killing process with pid 2367612 00:06:41.784 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2367612 00:06:41.784 12:30:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2367612 00:06:42.044 [2024-11-28 12:30:24.349236] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:42.044 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:42.044 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:42.044 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:42.044 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:42.044 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:42.044 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:42.044 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:42.044 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:42.044 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:42.044 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.044 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:42.044 12:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.951 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:43.951 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:43.951 00:06:43.951 real 0m11.515s 00:06:43.951 user 0m19.684s 
00:06:43.951 sys 0m4.861s 00:06:43.951 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.951 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.951 ************************************ 00:06:43.951 END TEST nvmf_host_management 00:06:43.951 ************************************ 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:44.212 ************************************ 00:06:44.212 START TEST nvmf_lvol 00:06:44.212 ************************************ 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:44.212 * Looking for test storage... 
00:06:44.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.212 12:30:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:44.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.212 --rc genhtml_branch_coverage=1 00:06:44.212 --rc genhtml_function_coverage=1 00:06:44.212 --rc genhtml_legend=1 00:06:44.212 --rc geninfo_all_blocks=1 00:06:44.212 --rc geninfo_unexecuted_blocks=1 
00:06:44.212 00:06:44.212 ' 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:44.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.212 --rc genhtml_branch_coverage=1 00:06:44.212 --rc genhtml_function_coverage=1 00:06:44.212 --rc genhtml_legend=1 00:06:44.212 --rc geninfo_all_blocks=1 00:06:44.212 --rc geninfo_unexecuted_blocks=1 00:06:44.212 00:06:44.212 ' 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:44.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.212 --rc genhtml_branch_coverage=1 00:06:44.212 --rc genhtml_function_coverage=1 00:06:44.212 --rc genhtml_legend=1 00:06:44.212 --rc geninfo_all_blocks=1 00:06:44.212 --rc geninfo_unexecuted_blocks=1 00:06:44.212 00:06:44.212 ' 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:44.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.212 --rc genhtml_branch_coverage=1 00:06:44.212 --rc genhtml_function_coverage=1 00:06:44.212 --rc genhtml_legend=1 00:06:44.212 --rc geninfo_all_blocks=1 00:06:44.212 --rc geninfo_unexecuted_blocks=1 00:06:44.212 00:06:44.212 ' 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.212 12:30:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.212 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:44.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:44.213 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.473 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:44.473 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:44.473 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:44.473 12:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:49.748 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:49.748 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:49.748 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:49.748 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:49.748 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:49.748 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:49.748 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:49.748 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:49.748 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:49.748 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:49.748 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:49.748 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:49.748 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:49.748 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:49.748 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:49.748 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:49.749 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:49.749 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:49.749 
12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:49.749 Found net devices under 0000:86:00.0: cvl_0_0 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:49.749 12:30:31 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:49.749 Found net devices under 0000:86:00.1: cvl_0_1 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:49.749 12:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:49.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:49.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:06:49.749 00:06:49.749 --- 10.0.0.2 ping statistics --- 00:06:49.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.749 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:49.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:49.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:06:49.749 00:06:49.749 --- 10.0.0.1 ping statistics --- 00:06:49.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.749 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2371866 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2371866 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2371866 ']' 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.749 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.750 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.750 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:49.750 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:49.750 [2024-11-28 12:30:32.265319] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
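Before the target app comes up, the interface plumbing established above can be condensed into an echo-only recap; it prints the commands rather than running them, since applying them needs root and the real e810 ports (the `cvl_*` names are this host's devices):

```shell
# Echo-only recap of the namespace/IP setup from the log above.
# Running these for real requires root and the cvl_* interfaces.
NS=cvl_0_0_ns_spdk
plumb() {
  echo "ip netns add $NS"
  echo "ip link set cvl_0_0 netns $NS"                  # target port moves into the namespace
  echo "ip addr add 10.0.0.1/24 dev cvl_0_1"            # initiator side stays in the root ns
  echo "ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0"
  echo "ip link set cvl_0_1 up"
  echo "ip netns exec $NS ip link set cvl_0_0 up"
  echo "iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
}
plumb
```

Putting the target port in its own namespace is what lets one box act as both NVMe/TCP target (10.0.0.2) and initiator (10.0.0.1) over a physical link.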
00:06:49.750 [2024-11-28 12:30:32.265365] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.008 [2024-11-28 12:30:32.331885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.008 [2024-11-28 12:30:32.374324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:50.008 [2024-11-28 12:30:32.374362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:50.008 [2024-11-28 12:30:32.374369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:50.008 [2024-11-28 12:30:32.374376] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:50.008 [2024-11-28 12:30:32.374381] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
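The `-m 0x7` mask on the `nvmf_tgt` command line (and the `-c 0x18` mask the perf run uses later) are hex core bitmaps; a small helper (our name, not SPDK's — SPDK parses the mask inside EAL) shows why exactly three reactors start here, on cores 0, 1 and 2:

```shell
# Decode an SPDK-style hex core mask into the list of selected cores.
selected_cores() {
  local mask=$(( $1 )) core cores=""
  for core in $(seq 0 31); do
    if (( (mask >> core) & 1 )); then
      cores="$cores $core"
    fi
  done
  echo "${cores# }"
}

selected_cores 0x7    # -> 0 1 2  (the three reactor cores in this run)
selected_cores 0x18   # -> 3 4   (the cores spdk_nvme_perf pins to below)
```

This also explains the perf output further down associating NSID 1 with lcores 3 and 4.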
00:06:50.008 [2024-11-28 12:30:32.375669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.008 [2024-11-28 12:30:32.375766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.008 [2024-11-28 12:30:32.375766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.008 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.008 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:50.008 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:50.008 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:50.008 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:50.008 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:50.008 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:50.266 [2024-11-28 12:30:32.682391] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.266 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:50.523 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:50.524 12:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:50.780 12:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:50.780 12:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:51.038 12:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:51.038 12:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=766e2101-8a98-48a3-ba6a-87d4b93f918a 00:06:51.038 12:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 766e2101-8a98-48a3-ba6a-87d4b93f918a lvol 20 00:06:51.296 12:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=570df24d-dde8-48f6-990f-98a0d06292cf 00:06:51.296 12:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:51.554 12:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 570df24d-dde8-48f6-990f-98a0d06292cf 00:06:51.813 12:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:51.813 [2024-11-28 12:30:34.288706] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:51.813 12:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:52.071 12:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2372183 00:06:52.071 12:30:34 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:52.071 12:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:53.008 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 570df24d-dde8-48f6-990f-98a0d06292cf MY_SNAPSHOT 00:06:53.266 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7f78cd3f-b870-4218-8a12-9487da8f4c6f 00:06:53.266 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 570df24d-dde8-48f6-990f-98a0d06292cf 30 00:06:53.525 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7f78cd3f-b870-4218-8a12-9487da8f4c6f MY_CLONE 00:06:53.785 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8bf0256e-d2ab-4b0b-944c-c24b79ca6ca7 00:06:53.785 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8bf0256e-d2ab-4b0b-944c-c24b79ca6ca7 00:06:54.354 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2372183 00:07:02.514 Initializing NVMe Controllers 00:07:02.514 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:02.514 Controller IO queue size 128, less than required. 00:07:02.514 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
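The lvol lifecycle driven above (malloc bdevs → raid0 → lvstore → lvol → snapshot → resize → clone → inflate) can be replayed as a dry-run; `rpc` here is an echo-only stand-in for `scripts/rpc.py`, so no live target is needed, and the UUIDs are the values this particular run happened to return:

```shell
# Echo-only stand-in: print the RPC instead of talking to /var/tmp/spdk.sock.
rpc() { echo "rpc.py $*"; }

LVS=766e2101-8a98-48a3-ba6a-87d4b93f918a     # lvstore UUID from this run
LVOL=570df24d-dde8-48f6-990f-98a0d06292cf    # lvol UUID from this run
SNAP=7f78cd3f-b870-4218-8a12-9487da8f4c6f    # snapshot UUID from this run
CLONE=8bf0256e-d2ab-4b0b-944c-c24b79ca6ca7   # clone UUID from this run

rpc bdev_malloc_create 64 512                              # Malloc0
rpc bdev_malloc_create 64 512                              # Malloc1
rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc0 Malloc1"
rpc bdev_lvol_create_lvstore raid0 lvs
rpc bdev_lvol_create -u "$LVS" lvol 20                     # 20 MiB volume
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
rpc bdev_lvol_snapshot "$LVOL" MY_SNAPSHOT   # lvol becomes a thin clone of the snapshot
rpc bdev_lvol_resize "$LVOL" 30              # grow the writable lvol to 30 MiB under live I/O
rpc bdev_lvol_clone "$SNAP" MY_CLONE         # second writable view of the snapshot
rpc bdev_lvol_inflate "$CLONE"               # copy clusters so the clone stands alone
```

The point of the test is that snapshot, resize, clone and inflate all succeed while `spdk_nvme_perf` keeps writing to the exported namespace.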
00:07:02.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:02.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:02.514 Initialization complete. Launching workers. 00:07:02.514 ======================================================== 00:07:02.514 Latency(us) 00:07:02.514 Device Information : IOPS MiB/s Average min max 00:07:02.514 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11710.23 45.74 10939.23 2073.37 40573.72 00:07:02.514 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11852.03 46.30 10806.38 3620.77 107172.53 00:07:02.514 ======================================================== 00:07:02.514 Total : 23562.26 92.04 10872.41 2073.37 107172.53 00:07:02.514 00:07:02.514 12:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:02.794 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 570df24d-dde8-48f6-990f-98a0d06292cf 00:07:03.078 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 766e2101-8a98-48a3-ba6a-87d4b93f918a 00:07:03.078 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:03.078 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:03.078 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:03.078 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:03.078 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:03.078 12:30:45 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:03.078 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:03.078 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:03.078 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:03.078 rmmod nvme_tcp 00:07:03.078 rmmod nvme_fabrics 00:07:03.078 rmmod nvme_keyring 00:07:03.078 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:03.078 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:03.078 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:03.078 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2371866 ']' 00:07:03.078 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2371866 00:07:03.078 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2371866 ']' 00:07:03.078 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2371866 00:07:03.078 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:03.078 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.078 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2371866 00:07:03.380 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.380 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.380 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2371866' 00:07:03.380 killing process with pid 2371866 00:07:03.380 
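Teardown mirrors setup in reverse; with the same echo-only `rpc` stand-in, the cleanup order the exit trap enforces looks roughly like this (privileged steps shown as echoes only):

```shell
# Echo-only sketch of the teardown order; pid and UUIDs are this run's values.
rpc() { echo "rpc.py $*"; }

rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0          # detach the namespace first
rpc bdev_lvol_delete 570df24d-dde8-48f6-990f-98a0d06292cf     # then the lvol
rpc bdev_lvol_delete_lvstore -u 766e2101-8a98-48a3-ba6a-87d4b93f918a
echo "kill 2371866"                                           # stop the nvmf_tgt app
echo "iptables-save | grep -v SPDK_NVMF | iptables-restore"   # drop only the tagged 4420 rule
echo "ip netns del cvl_0_0_ns_spdk"                           # return cvl_0_0 to the root namespace
```

Tagging the iptables rule with an `SPDK_NVMF` comment at setup time is what makes the `grep -v SPDK_NVMF` restore safe: only the test's own rule is removed.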
12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2371866 00:07:03.380 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2371866 00:07:03.380 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:03.380 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:03.380 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:03.380 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:03.380 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:03.380 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:03.380 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:03.380 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:03.380 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:03.380 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.380 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:03.380 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.934 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:05.934 00:07:05.934 real 0m21.386s 00:07:05.934 user 1m2.707s 00:07:05.934 sys 0m7.299s 00:07:05.934 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.934 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:05.934 ************************************ 00:07:05.934 
END TEST nvmf_lvol 00:07:05.934 ************************************ 00:07:05.934 12:30:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:05.934 12:30:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:05.934 12:30:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.934 12:30:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:05.934 ************************************ 00:07:05.934 START TEST nvmf_lvs_grow 00:07:05.934 ************************************ 00:07:05.934 12:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:05.934 * Looking for test storage... 00:07:05.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.934 12:30:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:05.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.934 --rc genhtml_branch_coverage=1 00:07:05.934 --rc genhtml_function_coverage=1 00:07:05.934 --rc genhtml_legend=1 00:07:05.934 --rc geninfo_all_blocks=1 00:07:05.934 --rc geninfo_unexecuted_blocks=1 00:07:05.934 00:07:05.934 ' 
00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:05.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.934 --rc genhtml_branch_coverage=1 00:07:05.934 --rc genhtml_function_coverage=1 00:07:05.934 --rc genhtml_legend=1 00:07:05.934 --rc geninfo_all_blocks=1 00:07:05.934 --rc geninfo_unexecuted_blocks=1 00:07:05.934 00:07:05.934 ' 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:05.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.934 --rc genhtml_branch_coverage=1 00:07:05.934 --rc genhtml_function_coverage=1 00:07:05.934 --rc genhtml_legend=1 00:07:05.934 --rc geninfo_all_blocks=1 00:07:05.934 --rc geninfo_unexecuted_blocks=1 00:07:05.934 00:07:05.934 ' 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:05.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.934 --rc genhtml_branch_coverage=1 00:07:05.934 --rc genhtml_function_coverage=1 00:07:05.934 --rc genhtml_legend=1 00:07:05.934 --rc geninfo_all_blocks=1 00:07:05.934 --rc geninfo_unexecuted_blocks=1 00:07:05.934 00:07:05.934 ' 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.934 12:30:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.934 
12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.934 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.935 12:30:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:05.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.935 
12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:05.935 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:11.210 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:11.210 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:11.210 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:11.210 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:11.210 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:11.210 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:11.210 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:11.210 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:11.210 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:11.210 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:11.210 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:11.210 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:11.210 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:11.210 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:11.210 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:11.210 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:11.210 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:11.210 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:11.210 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:11.210 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:11.211 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:11.211 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:11.211 
12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:11.211 Found net devices under 0000:86:00.0: cvl_0_0 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:11.211 Found net devices under 0000:86:00.1: cvl_0_1 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:11.211 12:30:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:11.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:11.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:07:11.211 00:07:11.211 --- 10.0.0.2 ping statistics --- 00:07:11.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.211 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:11.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:11.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:07:11.211 00:07:11.211 --- 10.0.0.1 ping statistics --- 00:07:11.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.211 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:11.211 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2377573 00:07:11.212 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:11.212 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2377573 00:07:11.212 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2377573 ']' 00:07:11.212 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.212 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.212 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.212 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.212 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:11.212 [2024-11-28 12:30:53.709175] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:07:11.212 [2024-11-28 12:30:53.709225] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.472 [2024-11-28 12:30:53.777491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.472 [2024-11-28 12:30:53.818083] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:11.472 [2024-11-28 12:30:53.818122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:11.472 [2024-11-28 12:30:53.818128] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:11.472 [2024-11-28 12:30:53.818134] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:11.472 [2024-11-28 12:30:53.818139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:11.472 [2024-11-28 12:30:53.818693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.472 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.472 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:11.472 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:11.472 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:11.472 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:11.472 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.472 12:30:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:11.731 [2024-11-28 12:30:54.119596] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.731 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:11.731 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.731 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.731 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:11.731 ************************************ 00:07:11.731 START TEST lvs_grow_clean 00:07:11.731 ************************************ 00:07:11.731 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:11.731 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:11.731 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:11.731 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:11.731 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:11.731 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:11.731 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:11.731 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:11.731 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:11.731 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:11.990 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:11.990 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:12.248 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=9e2920bc-5643-4bb2-a275-d977db0588c0 00:07:12.248 12:30:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e2920bc-5643-4bb2-a275-d977db0588c0 00:07:12.249 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:12.507 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:12.507 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:12.507 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9e2920bc-5643-4bb2-a275-d977db0588c0 lvol 150 00:07:12.507 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=db47d0f0-9636-45f9-8a68-965a4c105b3d 00:07:12.507 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:12.507 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:12.765 [2024-11-28 12:30:55.154386] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:12.765 [2024-11-28 12:30:55.154438] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:12.765 true 00:07:12.765 12:30:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e2920bc-5643-4bb2-a275-d977db0588c0 00:07:12.765 12:30:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:13.024 12:30:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:13.024 12:30:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:13.024 12:30:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 db47d0f0-9636-45f9-8a68-965a4c105b3d 00:07:13.283 12:30:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:13.542 [2024-11-28 12:30:55.880597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.542 12:30:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:13.802 12:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2378071 00:07:13.802 12:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:13.802 12:30:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:13.802 12:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2378071 /var/tmp/bdevperf.sock 00:07:13.802 12:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2378071 ']' 00:07:13.802 12:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:13.802 12:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.802 12:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:13.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:13.802 12:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.802 12:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:13.802 [2024-11-28 12:30:56.133767] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:07:13.802 [2024-11-28 12:30:56.133815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2378071 ] 00:07:13.802 [2024-11-28 12:30:56.195017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.802 [2024-11-28 12:30:56.235330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.061 12:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.061 12:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:14.061 12:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:14.320 Nvme0n1 00:07:14.320 12:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:14.580 [ 00:07:14.580 { 00:07:14.580 "name": "Nvme0n1", 00:07:14.580 "aliases": [ 00:07:14.580 "db47d0f0-9636-45f9-8a68-965a4c105b3d" 00:07:14.580 ], 00:07:14.580 "product_name": "NVMe disk", 00:07:14.580 "block_size": 4096, 00:07:14.580 "num_blocks": 38912, 00:07:14.580 "uuid": "db47d0f0-9636-45f9-8a68-965a4c105b3d", 00:07:14.580 "numa_id": 1, 00:07:14.580 "assigned_rate_limits": { 00:07:14.580 "rw_ios_per_sec": 0, 00:07:14.580 "rw_mbytes_per_sec": 0, 00:07:14.580 "r_mbytes_per_sec": 0, 00:07:14.580 "w_mbytes_per_sec": 0 00:07:14.580 }, 00:07:14.580 "claimed": false, 00:07:14.580 "zoned": false, 00:07:14.580 "supported_io_types": { 00:07:14.580 "read": true, 
00:07:14.580 "write": true, 00:07:14.580 "unmap": true, 00:07:14.580 "flush": true, 00:07:14.580 "reset": true, 00:07:14.580 "nvme_admin": true, 00:07:14.580 "nvme_io": true, 00:07:14.580 "nvme_io_md": false, 00:07:14.580 "write_zeroes": true, 00:07:14.580 "zcopy": false, 00:07:14.580 "get_zone_info": false, 00:07:14.580 "zone_management": false, 00:07:14.580 "zone_append": false, 00:07:14.580 "compare": true, 00:07:14.580 "compare_and_write": true, 00:07:14.580 "abort": true, 00:07:14.580 "seek_hole": false, 00:07:14.580 "seek_data": false, 00:07:14.580 "copy": true, 00:07:14.580 "nvme_iov_md": false 00:07:14.580 }, 00:07:14.580 "memory_domains": [ 00:07:14.580 { 00:07:14.580 "dma_device_id": "system", 00:07:14.580 "dma_device_type": 1 00:07:14.580 } 00:07:14.580 ], 00:07:14.580 "driver_specific": { 00:07:14.580 "nvme": [ 00:07:14.580 { 00:07:14.580 "trid": { 00:07:14.580 "trtype": "TCP", 00:07:14.580 "adrfam": "IPv4", 00:07:14.580 "traddr": "10.0.0.2", 00:07:14.580 "trsvcid": "4420", 00:07:14.580 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:14.580 }, 00:07:14.580 "ctrlr_data": { 00:07:14.580 "cntlid": 1, 00:07:14.580 "vendor_id": "0x8086", 00:07:14.580 "model_number": "SPDK bdev Controller", 00:07:14.580 "serial_number": "SPDK0", 00:07:14.580 "firmware_revision": "25.01", 00:07:14.580 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:14.580 "oacs": { 00:07:14.580 "security": 0, 00:07:14.580 "format": 0, 00:07:14.580 "firmware": 0, 00:07:14.580 "ns_manage": 0 00:07:14.580 }, 00:07:14.580 "multi_ctrlr": true, 00:07:14.580 "ana_reporting": false 00:07:14.580 }, 00:07:14.580 "vs": { 00:07:14.580 "nvme_version": "1.3" 00:07:14.580 }, 00:07:14.580 "ns_data": { 00:07:14.580 "id": 1, 00:07:14.580 "can_share": true 00:07:14.580 } 00:07:14.580 } 00:07:14.580 ], 00:07:14.580 "mp_policy": "active_passive" 00:07:14.580 } 00:07:14.580 } 00:07:14.580 ] 00:07:14.580 12:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2378156 00:07:14.580 12:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:14.580 12:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:14.580 Running I/O for 10 seconds... 00:07:15.516 Latency(us) 00:07:15.516 [2024-11-28T11:30:58.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.516 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.516 Nvme0n1 : 1.00 22875.00 89.36 0.00 0.00 0.00 0.00 0.00 00:07:15.516 [2024-11-28T11:30:58.035Z] =================================================================================================================== 00:07:15.516 [2024-11-28T11:30:58.035Z] Total : 22875.00 89.36 0.00 0.00 0.00 0.00 0.00 00:07:15.516 00:07:16.505 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9e2920bc-5643-4bb2-a275-d977db0588c0 00:07:16.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.505 Nvme0n1 : 2.00 22996.00 89.83 0.00 0.00 0.00 0.00 0.00 00:07:16.505 [2024-11-28T11:30:59.024Z] =================================================================================================================== 00:07:16.505 [2024-11-28T11:30:59.024Z] Total : 22996.00 89.83 0.00 0.00 0.00 0.00 0.00 00:07:16.505 00:07:16.763 true 00:07:16.763 12:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e2920bc-5643-4bb2-a275-d977db0588c0 00:07:16.763 12:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:17.021 12:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:17.021 12:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:17.022 12:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2378156 00:07:17.587 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.587 Nvme0n1 : 3.00 22998.00 89.84 0.00 0.00 0.00 0.00 0.00 00:07:17.587 [2024-11-28T11:31:00.106Z] =================================================================================================================== 00:07:17.587 [2024-11-28T11:31:00.106Z] Total : 22998.00 89.84 0.00 0.00 0.00 0.00 0.00 00:07:17.587 00:07:18.523 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.523 Nvme0n1 : 4.00 22995.25 89.83 0.00 0.00 0.00 0.00 0.00 00:07:18.523 [2024-11-28T11:31:01.042Z] =================================================================================================================== 00:07:18.523 [2024-11-28T11:31:01.042Z] Total : 22995.25 89.83 0.00 0.00 0.00 0.00 0.00 00:07:18.523 00:07:19.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.901 Nvme0n1 : 5.00 23039.60 90.00 0.00 0.00 0.00 0.00 0.00 00:07:19.901 [2024-11-28T11:31:02.420Z] =================================================================================================================== 00:07:19.901 [2024-11-28T11:31:02.420Z] Total : 23039.60 90.00 0.00 0.00 0.00 0.00 0.00 00:07:19.901 00:07:20.839 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.839 Nvme0n1 : 6.00 23056.33 90.06 0.00 0.00 0.00 0.00 0.00 00:07:20.839 [2024-11-28T11:31:03.358Z] =================================================================================================================== 00:07:20.839 
[2024-11-28T11:31:03.358Z] Total : 23056.33 90.06 0.00 0.00 0.00 0.00 0.00 00:07:20.839 00:07:21.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.775 Nvme0n1 : 7.00 23083.57 90.17 0.00 0.00 0.00 0.00 0.00 00:07:21.775 [2024-11-28T11:31:04.294Z] =================================================================================================================== 00:07:21.775 [2024-11-28T11:31:04.294Z] Total : 23083.57 90.17 0.00 0.00 0.00 0.00 0.00 00:07:21.775 00:07:22.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.716 Nvme0n1 : 8.00 23109.38 90.27 0.00 0.00 0.00 0.00 0.00 00:07:22.716 [2024-11-28T11:31:05.235Z] =================================================================================================================== 00:07:22.716 [2024-11-28T11:31:05.235Z] Total : 23109.38 90.27 0.00 0.00 0.00 0.00 0.00 00:07:22.716 00:07:23.655 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.655 Nvme0n1 : 9.00 23138.78 90.39 0.00 0.00 0.00 0.00 0.00 00:07:23.655 [2024-11-28T11:31:06.174Z] =================================================================================================================== 00:07:23.655 [2024-11-28T11:31:06.174Z] Total : 23138.78 90.39 0.00 0.00 0.00 0.00 0.00 00:07:23.655 00:07:24.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.592 Nvme0n1 : 10.00 23157.20 90.46 0.00 0.00 0.00 0.00 0.00 00:07:24.592 [2024-11-28T11:31:07.111Z] =================================================================================================================== 00:07:24.592 [2024-11-28T11:31:07.111Z] Total : 23157.20 90.46 0.00 0.00 0.00 0.00 0.00 00:07:24.592 00:07:24.592 00:07:24.592 Latency(us) 00:07:24.592 [2024-11-28T11:31:07.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:24.592 Nvme0n1 : 10.00 23160.08 90.47 0.00 0.00 5523.70 2265.27 11226.60 00:07:24.592 [2024-11-28T11:31:07.111Z] =================================================================================================================== 00:07:24.592 [2024-11-28T11:31:07.111Z] Total : 23160.08 90.47 0.00 0.00 5523.70 2265.27 11226.60 00:07:24.592 { 00:07:24.592 "results": [ 00:07:24.592 { 00:07:24.592 "job": "Nvme0n1", 00:07:24.592 "core_mask": "0x2", 00:07:24.592 "workload": "randwrite", 00:07:24.592 "status": "finished", 00:07:24.592 "queue_depth": 128, 00:07:24.592 "io_size": 4096, 00:07:24.592 "runtime": 10.004282, 00:07:24.592 "iops": 23160.08285252255, 00:07:24.592 "mibps": 90.46907364266622, 00:07:24.592 "io_failed": 0, 00:07:24.592 "io_timeout": 0, 00:07:24.592 "avg_latency_us": 5523.697043808524, 00:07:24.592 "min_latency_us": 2265.2660869565216, 00:07:24.592 "max_latency_us": 11226.601739130434 00:07:24.592 } 00:07:24.592 ], 00:07:24.592 "core_count": 1 00:07:24.592 } 00:07:24.592 12:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2378071 00:07:24.592 12:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2378071 ']' 00:07:24.592 12:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2378071 00:07:24.592 12:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:24.592 12:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.592 12:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2378071 00:07:24.592 12:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:24.592 12:31:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:24.592 12:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2378071' 00:07:24.592 killing process with pid 2378071 00:07:24.592 12:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2378071 00:07:24.592 Received shutdown signal, test time was about 10.000000 seconds 00:07:24.592 00:07:24.592 Latency(us) 00:07:24.592 [2024-11-28T11:31:07.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.592 [2024-11-28T11:31:07.111Z] =================================================================================================================== 00:07:24.592 [2024-11-28T11:31:07.111Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:24.592 12:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2378071 00:07:24.852 12:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:25.111 12:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:25.370 12:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e2920bc-5643-4bb2-a275-d977db0588c0 00:07:25.370 12:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:25.370 12:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:25.370 12:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:25.370 12:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:25.630 [2024-11-28 12:31:08.019614] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:25.630 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e2920bc-5643-4bb2-a275-d977db0588c0 00:07:25.630 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:25.630 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e2920bc-5643-4bb2-a275-d977db0588c0 00:07:25.630 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.630 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.630 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.630 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.630 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.630 
12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.630 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.630 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:25.630 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e2920bc-5643-4bb2-a275-d977db0588c0 00:07:25.890 request: 00:07:25.890 { 00:07:25.890 "uuid": "9e2920bc-5643-4bb2-a275-d977db0588c0", 00:07:25.890 "method": "bdev_lvol_get_lvstores", 00:07:25.890 "req_id": 1 00:07:25.890 } 00:07:25.890 Got JSON-RPC error response 00:07:25.890 response: 00:07:25.890 { 00:07:25.890 "code": -19, 00:07:25.890 "message": "No such device" 00:07:25.890 } 00:07:25.890 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:25.890 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:25.890 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:25.890 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:25.890 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:26.149 aio_bdev 00:07:26.149 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev db47d0f0-9636-45f9-8a68-965a4c105b3d 00:07:26.149 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=db47d0f0-9636-45f9-8a68-965a4c105b3d 00:07:26.149 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:26.149 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:26.149 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:26.149 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:26.149 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:26.149 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b db47d0f0-9636-45f9-8a68-965a4c105b3d -t 2000 00:07:26.408 [ 00:07:26.408 { 00:07:26.408 "name": "db47d0f0-9636-45f9-8a68-965a4c105b3d", 00:07:26.408 "aliases": [ 00:07:26.408 "lvs/lvol" 00:07:26.408 ], 00:07:26.408 "product_name": "Logical Volume", 00:07:26.408 "block_size": 4096, 00:07:26.408 "num_blocks": 38912, 00:07:26.408 "uuid": "db47d0f0-9636-45f9-8a68-965a4c105b3d", 00:07:26.408 "assigned_rate_limits": { 00:07:26.408 "rw_ios_per_sec": 0, 00:07:26.408 "rw_mbytes_per_sec": 0, 00:07:26.408 "r_mbytes_per_sec": 0, 00:07:26.408 "w_mbytes_per_sec": 0 00:07:26.408 }, 00:07:26.408 "claimed": false, 00:07:26.408 "zoned": false, 00:07:26.408 "supported_io_types": { 00:07:26.408 "read": true, 00:07:26.408 "write": true, 00:07:26.408 "unmap": true, 00:07:26.408 "flush": false, 00:07:26.408 "reset": true, 00:07:26.408 
"nvme_admin": false, 00:07:26.408 "nvme_io": false, 00:07:26.408 "nvme_io_md": false, 00:07:26.408 "write_zeroes": true, 00:07:26.408 "zcopy": false, 00:07:26.408 "get_zone_info": false, 00:07:26.408 "zone_management": false, 00:07:26.408 "zone_append": false, 00:07:26.408 "compare": false, 00:07:26.408 "compare_and_write": false, 00:07:26.408 "abort": false, 00:07:26.408 "seek_hole": true, 00:07:26.408 "seek_data": true, 00:07:26.408 "copy": false, 00:07:26.408 "nvme_iov_md": false 00:07:26.408 }, 00:07:26.408 "driver_specific": { 00:07:26.408 "lvol": { 00:07:26.408 "lvol_store_uuid": "9e2920bc-5643-4bb2-a275-d977db0588c0", 00:07:26.408 "base_bdev": "aio_bdev", 00:07:26.408 "thin_provision": false, 00:07:26.408 "num_allocated_clusters": 38, 00:07:26.408 "snapshot": false, 00:07:26.408 "clone": false, 00:07:26.408 "esnap_clone": false 00:07:26.408 } 00:07:26.408 } 00:07:26.408 } 00:07:26.408 ] 00:07:26.408 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:26.408 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e2920bc-5643-4bb2-a275-d977db0588c0 00:07:26.408 12:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:26.667 12:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:26.667 12:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:26.667 12:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e2920bc-5643-4bb2-a275-d977db0588c0 00:07:26.926 12:31:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:26.926 12:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete db47d0f0-9636-45f9-8a68-965a4c105b3d 00:07:26.926 12:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9e2920bc-5643-4bb2-a275-d977db0588c0 00:07:27.186 12:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:27.445 12:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:27.445 00:07:27.445 real 0m15.672s 00:07:27.445 user 0m15.256s 00:07:27.445 sys 0m1.407s 00:07:27.445 12:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.445 12:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:27.445 ************************************ 00:07:27.445 END TEST lvs_grow_clean 00:07:27.445 ************************************ 00:07:27.445 12:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:27.445 12:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:27.445 12:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.445 12:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:27.445 ************************************ 
00:07:27.445 START TEST lvs_grow_dirty 00:07:27.445 ************************************ 00:07:27.445 12:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:27.445 12:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:27.445 12:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:27.445 12:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:27.445 12:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:27.445 12:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:27.445 12:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:27.445 12:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:27.445 12:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:27.445 12:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:27.704 12:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:27.704 12:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:27.963 12:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=19def632-068e-4d22-8798-8d18c6986e7f 00:07:27.963 12:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19def632-068e-4d22-8798-8d18c6986e7f 00:07:27.963 12:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:28.222 12:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:28.222 12:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:28.222 12:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 19def632-068e-4d22-8798-8d18c6986e7f lvol 150 00:07:28.222 12:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ebce656a-cbb7-4835-be7d-ddf190344bcc 00:07:28.222 12:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:28.223 12:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:28.480 [2024-11-28 12:31:10.901878] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:28.480 [2024-11-28 12:31:10.901933] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:28.480 true 00:07:28.480 12:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19def632-068e-4d22-8798-8d18c6986e7f 00:07:28.480 12:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:28.739 12:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:28.739 12:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:28.998 12:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ebce656a-cbb7-4835-be7d-ddf190344bcc 00:07:28.998 12:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:29.256 [2024-11-28 12:31:11.660131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.256 12:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:29.515 12:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2380679 00:07:29.515 12:31:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:29.515 12:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:29.515 12:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2380679 /var/tmp/bdevperf.sock 00:07:29.515 12:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2380679 ']' 00:07:29.515 12:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:29.515 12:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.515 12:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:29.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:29.515 12:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.515 12:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:29.515 [2024-11-28 12:31:11.896044] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:07:29.515 [2024-11-28 12:31:11.896091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2380679 ] 00:07:29.515 [2024-11-28 12:31:11.956817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.515 [2024-11-28 12:31:11.997103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.775 12:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.775 12:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:29.775 12:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:30.033 Nvme0n1 00:07:30.034 12:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:30.034 [ 00:07:30.034 { 00:07:30.034 "name": "Nvme0n1", 00:07:30.034 "aliases": [ 00:07:30.034 "ebce656a-cbb7-4835-be7d-ddf190344bcc" 00:07:30.034 ], 00:07:30.034 "product_name": "NVMe disk", 00:07:30.034 "block_size": 4096, 00:07:30.034 "num_blocks": 38912, 00:07:30.034 "uuid": "ebce656a-cbb7-4835-be7d-ddf190344bcc", 00:07:30.034 "numa_id": 1, 00:07:30.034 "assigned_rate_limits": { 00:07:30.034 "rw_ios_per_sec": 0, 00:07:30.034 "rw_mbytes_per_sec": 0, 00:07:30.034 "r_mbytes_per_sec": 0, 00:07:30.034 "w_mbytes_per_sec": 0 00:07:30.034 }, 00:07:30.034 "claimed": false, 00:07:30.034 "zoned": false, 00:07:30.034 "supported_io_types": { 00:07:30.034 "read": true, 
00:07:30.034 "write": true, 00:07:30.034 "unmap": true, 00:07:30.034 "flush": true, 00:07:30.034 "reset": true, 00:07:30.034 "nvme_admin": true, 00:07:30.034 "nvme_io": true, 00:07:30.034 "nvme_io_md": false, 00:07:30.034 "write_zeroes": true, 00:07:30.034 "zcopy": false, 00:07:30.034 "get_zone_info": false, 00:07:30.034 "zone_management": false, 00:07:30.034 "zone_append": false, 00:07:30.034 "compare": true, 00:07:30.034 "compare_and_write": true, 00:07:30.034 "abort": true, 00:07:30.034 "seek_hole": false, 00:07:30.034 "seek_data": false, 00:07:30.034 "copy": true, 00:07:30.034 "nvme_iov_md": false 00:07:30.034 }, 00:07:30.034 "memory_domains": [ 00:07:30.034 { 00:07:30.034 "dma_device_id": "system", 00:07:30.034 "dma_device_type": 1 00:07:30.034 } 00:07:30.034 ], 00:07:30.034 "driver_specific": { 00:07:30.034 "nvme": [ 00:07:30.034 { 00:07:30.034 "trid": { 00:07:30.034 "trtype": "TCP", 00:07:30.034 "adrfam": "IPv4", 00:07:30.034 "traddr": "10.0.0.2", 00:07:30.034 "trsvcid": "4420", 00:07:30.034 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:30.034 }, 00:07:30.034 "ctrlr_data": { 00:07:30.034 "cntlid": 1, 00:07:30.034 "vendor_id": "0x8086", 00:07:30.034 "model_number": "SPDK bdev Controller", 00:07:30.034 "serial_number": "SPDK0", 00:07:30.034 "firmware_revision": "25.01", 00:07:30.034 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:30.034 "oacs": { 00:07:30.034 "security": 0, 00:07:30.034 "format": 0, 00:07:30.034 "firmware": 0, 00:07:30.034 "ns_manage": 0 00:07:30.034 }, 00:07:30.034 "multi_ctrlr": true, 00:07:30.034 "ana_reporting": false 00:07:30.034 }, 00:07:30.034 "vs": { 00:07:30.034 "nvme_version": "1.3" 00:07:30.034 }, 00:07:30.034 "ns_data": { 00:07:30.034 "id": 1, 00:07:30.034 "can_share": true 00:07:30.034 } 00:07:30.034 } 00:07:30.034 ], 00:07:30.034 "mp_policy": "active_passive" 00:07:30.034 } 00:07:30.034 } 00:07:30.034 ] 00:07:30.034 12:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2380903 00:07:30.034 12:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:30.034 12:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:30.292 Running I/O for 10 seconds... 00:07:31.229 Latency(us) 00:07:31.229 [2024-11-28T11:31:13.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:31.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.229 Nvme0n1 : 1.00 22672.00 88.56 0.00 0.00 0.00 0.00 0.00 00:07:31.229 [2024-11-28T11:31:13.748Z] =================================================================================================================== 00:07:31.229 [2024-11-28T11:31:13.748Z] Total : 22672.00 88.56 0.00 0.00 0.00 0.00 0.00 00:07:31.229 00:07:32.167 12:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 19def632-068e-4d22-8798-8d18c6986e7f 00:07:32.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.167 Nvme0n1 : 2.00 22880.50 89.38 0.00 0.00 0.00 0.00 0.00 00:07:32.167 [2024-11-28T11:31:14.686Z] =================================================================================================================== 00:07:32.167 [2024-11-28T11:31:14.686Z] Total : 22880.50 89.38 0.00 0.00 0.00 0.00 0.00 00:07:32.167 00:07:32.426 true 00:07:32.426 12:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19def632-068e-4d22-8798-8d18c6986e7f 00:07:32.426 12:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:32.685 12:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:32.685 12:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:32.685 12:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2380903 00:07:33.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.251 Nvme0n1 : 3.00 22917.00 89.52 0.00 0.00 0.00 0.00 0.00 00:07:33.251 [2024-11-28T11:31:15.770Z] =================================================================================================================== 00:07:33.251 [2024-11-28T11:31:15.770Z] Total : 22917.00 89.52 0.00 0.00 0.00 0.00 0.00 00:07:33.251 00:07:34.187 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.187 Nvme0n1 : 4.00 22967.25 89.72 0.00 0.00 0.00 0.00 0.00 00:07:34.187 [2024-11-28T11:31:16.706Z] =================================================================================================================== 00:07:34.187 [2024-11-28T11:31:16.706Z] Total : 22967.25 89.72 0.00 0.00 0.00 0.00 0.00 00:07:34.187 00:07:35.565 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.565 Nvme0n1 : 5.00 23008.20 89.88 0.00 0.00 0.00 0.00 0.00 00:07:35.565 [2024-11-28T11:31:18.084Z] =================================================================================================================== 00:07:35.565 [2024-11-28T11:31:18.084Z] Total : 23008.20 89.88 0.00 0.00 0.00 0.00 0.00 00:07:35.565 00:07:36.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.133 Nvme0n1 : 6.00 23045.50 90.02 0.00 0.00 0.00 0.00 0.00 00:07:36.133 [2024-11-28T11:31:18.652Z] =================================================================================================================== 00:07:36.133 
[2024-11-28T11:31:18.652Z] Total : 23045.50 90.02 0.00 0.00 0.00 0.00 0.00 00:07:36.133 00:07:37.509 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.509 Nvme0n1 : 7.00 23082.71 90.17 0.00 0.00 0.00 0.00 0.00 00:07:37.509 [2024-11-28T11:31:20.028Z] =================================================================================================================== 00:07:37.509 [2024-11-28T11:31:20.028Z] Total : 23082.71 90.17 0.00 0.00 0.00 0.00 0.00 00:07:37.509 00:07:38.445 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.446 Nvme0n1 : 8.00 23052.38 90.05 0.00 0.00 0.00 0.00 0.00 00:07:38.446 [2024-11-28T11:31:20.965Z] =================================================================================================================== 00:07:38.446 [2024-11-28T11:31:20.965Z] Total : 23052.38 90.05 0.00 0.00 0.00 0.00 0.00 00:07:38.446 00:07:39.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.382 Nvme0n1 : 9.00 23072.67 90.13 0.00 0.00 0.00 0.00 0.00 00:07:39.382 [2024-11-28T11:31:21.901Z] =================================================================================================================== 00:07:39.382 [2024-11-28T11:31:21.901Z] Total : 23072.67 90.13 0.00 0.00 0.00 0.00 0.00 00:07:39.382 00:07:40.317 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.318 Nvme0n1 : 10.00 23093.50 90.21 0.00 0.00 0.00 0.00 0.00 00:07:40.318 [2024-11-28T11:31:22.837Z] =================================================================================================================== 00:07:40.318 [2024-11-28T11:31:22.837Z] Total : 23093.50 90.21 0.00 0.00 0.00 0.00 0.00 00:07:40.318 00:07:40.318 00:07:40.318 Latency(us) 00:07:40.318 [2024-11-28T11:31:22.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.318 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:40.318 Nvme0n1 : 10.00 23098.45 90.23 0.00 0.00 5538.53 3390.78 14816.83 00:07:40.318 [2024-11-28T11:31:22.837Z] =================================================================================================================== 00:07:40.318 [2024-11-28T11:31:22.837Z] Total : 23098.45 90.23 0.00 0.00 5538.53 3390.78 14816.83 00:07:40.318 { 00:07:40.318 "results": [ 00:07:40.318 { 00:07:40.318 "job": "Nvme0n1", 00:07:40.318 "core_mask": "0x2", 00:07:40.318 "workload": "randwrite", 00:07:40.318 "status": "finished", 00:07:40.318 "queue_depth": 128, 00:07:40.318 "io_size": 4096, 00:07:40.318 "runtime": 10.003397, 00:07:40.318 "iops": 23098.453455361214, 00:07:40.318 "mibps": 90.22833381000474, 00:07:40.318 "io_failed": 0, 00:07:40.318 "io_timeout": 0, 00:07:40.318 "avg_latency_us": 5538.5338975762115, 00:07:40.318 "min_latency_us": 3390.775652173913, 00:07:40.318 "max_latency_us": 14816.834782608696 00:07:40.318 } 00:07:40.318 ], 00:07:40.318 "core_count": 1 00:07:40.318 } 00:07:40.318 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2380679 00:07:40.318 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2380679 ']' 00:07:40.318 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2380679 00:07:40.318 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:40.318 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.318 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2380679 00:07:40.318 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:40.318 12:31:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:40.318 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2380679' 00:07:40.318 killing process with pid 2380679 00:07:40.318 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2380679 00:07:40.318 Received shutdown signal, test time was about 10.000000 seconds 00:07:40.318 00:07:40.318 Latency(us) 00:07:40.318 [2024-11-28T11:31:22.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.318 [2024-11-28T11:31:22.837Z] =================================================================================================================== 00:07:40.318 [2024-11-28T11:31:22.837Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:40.318 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2380679 00:07:40.577 12:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:40.577 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:40.836 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19def632-068e-4d22-8798-8d18c6986e7f 00:07:40.836 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:41.094 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:41.094 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:41.094 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2377573 00:07:41.094 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2377573 00:07:41.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2377573 Killed "${NVMF_APP[@]}" "$@" 00:07:41.094 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:41.094 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:41.094 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:41.094 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:41.094 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:41.094 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2382752 00:07:41.094 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2382752 00:07:41.094 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:41.094 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2382752 ']' 00:07:41.094 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.094 12:31:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.094 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.094 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.094 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:41.094 [2024-11-28 12:31:23.582191] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:07:41.094 [2024-11-28 12:31:23.582239] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.352 [2024-11-28 12:31:23.649576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.352 [2024-11-28 12:31:23.690842] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.352 [2024-11-28 12:31:23.690875] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.352 [2024-11-28 12:31:23.690882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.352 [2024-11-28 12:31:23.690888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.352 [2024-11-28 12:31:23.690893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:41.352 [2024-11-28 12:31:23.691480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.352 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.352 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:41.352 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:41.352 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:41.352 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:41.352 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.352 12:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:41.611 [2024-11-28 12:31:24.003118] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:41.611 [2024-11-28 12:31:24.003205] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:41.611 [2024-11-28 12:31:24.003231] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:41.611 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:41.611 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ebce656a-cbb7-4835-be7d-ddf190344bcc 00:07:41.611 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ebce656a-cbb7-4835-be7d-ddf190344bcc 
00:07:41.611 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:41.611 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:41.611 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:41.611 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:41.611 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:41.869 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ebce656a-cbb7-4835-be7d-ddf190344bcc -t 2000 00:07:41.869 [ 00:07:41.869 { 00:07:41.869 "name": "ebce656a-cbb7-4835-be7d-ddf190344bcc", 00:07:41.869 "aliases": [ 00:07:41.869 "lvs/lvol" 00:07:41.869 ], 00:07:41.869 "product_name": "Logical Volume", 00:07:41.869 "block_size": 4096, 00:07:41.869 "num_blocks": 38912, 00:07:41.869 "uuid": "ebce656a-cbb7-4835-be7d-ddf190344bcc", 00:07:41.869 "assigned_rate_limits": { 00:07:41.869 "rw_ios_per_sec": 0, 00:07:41.869 "rw_mbytes_per_sec": 0, 00:07:41.869 "r_mbytes_per_sec": 0, 00:07:41.869 "w_mbytes_per_sec": 0 00:07:41.869 }, 00:07:41.869 "claimed": false, 00:07:41.869 "zoned": false, 00:07:41.869 "supported_io_types": { 00:07:41.869 "read": true, 00:07:41.869 "write": true, 00:07:41.869 "unmap": true, 00:07:41.869 "flush": false, 00:07:41.869 "reset": true, 00:07:41.869 "nvme_admin": false, 00:07:41.869 "nvme_io": false, 00:07:41.869 "nvme_io_md": false, 00:07:41.869 "write_zeroes": true, 00:07:41.869 "zcopy": false, 00:07:41.869 "get_zone_info": false, 00:07:41.869 "zone_management": false, 00:07:41.869 "zone_append": 
false, 00:07:41.869 "compare": false, 00:07:41.869 "compare_and_write": false, 00:07:41.869 "abort": false, 00:07:41.869 "seek_hole": true, 00:07:41.869 "seek_data": true, 00:07:41.869 "copy": false, 00:07:41.869 "nvme_iov_md": false 00:07:41.869 }, 00:07:41.869 "driver_specific": { 00:07:41.869 "lvol": { 00:07:41.869 "lvol_store_uuid": "19def632-068e-4d22-8798-8d18c6986e7f", 00:07:41.869 "base_bdev": "aio_bdev", 00:07:41.869 "thin_provision": false, 00:07:41.869 "num_allocated_clusters": 38, 00:07:41.869 "snapshot": false, 00:07:41.869 "clone": false, 00:07:41.869 "esnap_clone": false 00:07:41.869 } 00:07:41.869 } 00:07:41.869 } 00:07:41.870 ] 00:07:42.128 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:42.128 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19def632-068e-4d22-8798-8d18c6986e7f 00:07:42.128 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:42.128 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:42.128 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19def632-068e-4d22-8798-8d18c6986e7f 00:07:42.128 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:42.386 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:42.386 12:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:42.645 [2024-11-28 12:31:24.972237] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:42.645 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19def632-068e-4d22-8798-8d18c6986e7f 00:07:42.645 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:42.645 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19def632-068e-4d22-8798-8d18c6986e7f 00:07:42.645 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.645 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:42.645 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.645 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:42.645 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.645 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:42.645 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.645 12:31:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:42.645 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19def632-068e-4d22-8798-8d18c6986e7f 00:07:42.903 request: 00:07:42.903 { 00:07:42.903 "uuid": "19def632-068e-4d22-8798-8d18c6986e7f", 00:07:42.903 "method": "bdev_lvol_get_lvstores", 00:07:42.903 "req_id": 1 00:07:42.903 } 00:07:42.903 Got JSON-RPC error response 00:07:42.903 response: 00:07:42.903 { 00:07:42.903 "code": -19, 00:07:42.903 "message": "No such device" 00:07:42.903 } 00:07:42.903 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:42.903 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:42.903 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:42.903 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:42.903 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:42.903 aio_bdev 00:07:42.903 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ebce656a-cbb7-4835-be7d-ddf190344bcc 00:07:42.903 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ebce656a-cbb7-4835-be7d-ddf190344bcc 00:07:42.903 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:42.903 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:42.903 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:42.903 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:42.903 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:43.161 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ebce656a-cbb7-4835-be7d-ddf190344bcc -t 2000 00:07:43.420 [ 00:07:43.420 { 00:07:43.420 "name": "ebce656a-cbb7-4835-be7d-ddf190344bcc", 00:07:43.420 "aliases": [ 00:07:43.420 "lvs/lvol" 00:07:43.420 ], 00:07:43.420 "product_name": "Logical Volume", 00:07:43.420 "block_size": 4096, 00:07:43.420 "num_blocks": 38912, 00:07:43.420 "uuid": "ebce656a-cbb7-4835-be7d-ddf190344bcc", 00:07:43.420 "assigned_rate_limits": { 00:07:43.420 "rw_ios_per_sec": 0, 00:07:43.420 "rw_mbytes_per_sec": 0, 00:07:43.420 "r_mbytes_per_sec": 0, 00:07:43.420 "w_mbytes_per_sec": 0 00:07:43.420 }, 00:07:43.420 "claimed": false, 00:07:43.420 "zoned": false, 00:07:43.420 "supported_io_types": { 00:07:43.420 "read": true, 00:07:43.420 "write": true, 00:07:43.420 "unmap": true, 00:07:43.420 "flush": false, 00:07:43.420 "reset": true, 00:07:43.420 "nvme_admin": false, 00:07:43.420 "nvme_io": false, 00:07:43.420 "nvme_io_md": false, 00:07:43.420 "write_zeroes": true, 00:07:43.420 "zcopy": false, 00:07:43.420 "get_zone_info": false, 00:07:43.420 "zone_management": false, 00:07:43.420 "zone_append": false, 00:07:43.420 "compare": false, 00:07:43.420 "compare_and_write": false, 
00:07:43.420 "abort": false, 00:07:43.420 "seek_hole": true, 00:07:43.420 "seek_data": true, 00:07:43.420 "copy": false, 00:07:43.420 "nvme_iov_md": false 00:07:43.420 }, 00:07:43.420 "driver_specific": { 00:07:43.420 "lvol": { 00:07:43.420 "lvol_store_uuid": "19def632-068e-4d22-8798-8d18c6986e7f", 00:07:43.420 "base_bdev": "aio_bdev", 00:07:43.420 "thin_provision": false, 00:07:43.420 "num_allocated_clusters": 38, 00:07:43.420 "snapshot": false, 00:07:43.420 "clone": false, 00:07:43.420 "esnap_clone": false 00:07:43.420 } 00:07:43.420 } 00:07:43.420 } 00:07:43.420 ] 00:07:43.420 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:43.420 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19def632-068e-4d22-8798-8d18c6986e7f 00:07:43.420 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:43.680 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:43.680 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19def632-068e-4d22-8798-8d18c6986e7f 00:07:43.680 12:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:43.680 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:43.680 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ebce656a-cbb7-4835-be7d-ddf190344bcc 00:07:43.939 12:31:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 19def632-068e-4d22-8798-8d18c6986e7f 00:07:44.198 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:44.456 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:44.456 00:07:44.456 real 0m16.870s 00:07:44.456 user 0m43.700s 00:07:44.456 sys 0m3.664s 00:07:44.456 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:44.457 ************************************ 00:07:44.457 END TEST lvs_grow_dirty 00:07:44.457 ************************************ 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:44.457 nvmf_trace.0 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:44.457 rmmod nvme_tcp 00:07:44.457 rmmod nvme_fabrics 00:07:44.457 rmmod nvme_keyring 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2382752 ']' 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2382752 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2382752 ']' 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2382752 
00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.457 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2382752 00:07:44.716 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.716 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.716 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2382752' 00:07:44.716 killing process with pid 2382752 00:07:44.716 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2382752 00:07:44.716 12:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2382752 00:07:44.716 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:44.716 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:44.716 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:44.716 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:44.716 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:44.716 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:44.716 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:44.716 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:44.716 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:44.716 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.716 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.716 12:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:47.254 00:07:47.254 real 0m41.245s 00:07:47.254 user 1m4.366s 00:07:47.254 sys 0m9.706s 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:47.254 ************************************ 00:07:47.254 END TEST nvmf_lvs_grow 00:07:47.254 ************************************ 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:47.254 ************************************ 00:07:47.254 START TEST nvmf_bdev_io_wait 00:07:47.254 ************************************ 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:47.254 * Looking for test storage... 
00:07:47.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:47.254 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.254 --rc genhtml_branch_coverage=1 00:07:47.254 --rc genhtml_function_coverage=1 00:07:47.254 --rc genhtml_legend=1 00:07:47.254 --rc geninfo_all_blocks=1 00:07:47.254 --rc geninfo_unexecuted_blocks=1 00:07:47.254 00:07:47.254 ' 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:47.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.254 --rc genhtml_branch_coverage=1 00:07:47.254 --rc genhtml_function_coverage=1 00:07:47.254 --rc genhtml_legend=1 00:07:47.254 --rc geninfo_all_blocks=1 00:07:47.254 --rc geninfo_unexecuted_blocks=1 00:07:47.254 00:07:47.254 ' 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:47.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.254 --rc genhtml_branch_coverage=1 00:07:47.254 --rc genhtml_function_coverage=1 00:07:47.254 --rc genhtml_legend=1 00:07:47.254 --rc geninfo_all_blocks=1 00:07:47.254 --rc geninfo_unexecuted_blocks=1 00:07:47.254 00:07:47.254 ' 00:07:47.254 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:47.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.254 --rc genhtml_branch_coverage=1 00:07:47.254 --rc genhtml_function_coverage=1 00:07:47.254 --rc genhtml_legend=1 00:07:47.255 --rc geninfo_all_blocks=1 00:07:47.255 --rc geninfo_unexecuted_blocks=1 00:07:47.255 00:07:47.255 ' 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.255 12:31:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:47.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:47.255 12:31:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:52.528 12:31:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:52.528 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:52.528 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.528 12:31:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:52.528 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:52.529 Found net devices under 0000:86:00.0: cvl_0_0 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.529 
12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:52.529 Found net devices under 0000:86:00.1: cvl_0_1 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:52.529 12:31:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:52.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:07:52.529 00:07:52.529 --- 10.0.0.2 ping statistics --- 00:07:52.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.529 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:52.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:07:52.529 00:07:52.529 --- 10.0.0.1 ping statistics --- 00:07:52.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.529 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2386811 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 2386811 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2386811 ']' 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:52.529 [2024-11-28 12:31:34.688186] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:07:52.529 [2024-11-28 12:31:34.688228] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.529 [2024-11-28 12:31:34.752754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.529 [2024-11-28 12:31:34.796871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.529 [2024-11-28 12:31:34.796910] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:52.529 [2024-11-28 12:31:34.796918] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.529 [2024-11-28 12:31:34.796925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.529 [2024-11-28 12:31:34.796930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.529 [2024-11-28 12:31:34.798438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.529 [2024-11-28 12:31:34.798535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.529 [2024-11-28 12:31:34.798623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.529 [2024-11-28 12:31:34.798625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:52.529 12:31:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.529 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:52.530 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.530 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:52.530 [2024-11-28 12:31:34.943134] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.530 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.530 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:52.530 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.530 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:52.530 Malloc0 00:07:52.530 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.530 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:52.530 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.530 
12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:52.530 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.530 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:52.530 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.530 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:52.530 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.530 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:52.530 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.530 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:52.530 [2024-11-28 12:31:34.994765] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.530 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.530 12:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2386835 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2386837 
00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:52.530 { 00:07:52.530 "params": { 00:07:52.530 "name": "Nvme$subsystem", 00:07:52.530 "trtype": "$TEST_TRANSPORT", 00:07:52.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:52.530 "adrfam": "ipv4", 00:07:52.530 "trsvcid": "$NVMF_PORT", 00:07:52.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:52.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:52.530 "hdgst": ${hdgst:-false}, 00:07:52.530 "ddgst": ${ddgst:-false} 00:07:52.530 }, 00:07:52.530 "method": "bdev_nvme_attach_controller" 00:07:52.530 } 00:07:52.530 EOF 00:07:52.530 )") 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2386839 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:52.530 { 00:07:52.530 "params": { 00:07:52.530 "name": "Nvme$subsystem", 00:07:52.530 "trtype": "$TEST_TRANSPORT", 00:07:52.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:52.530 "adrfam": "ipv4", 00:07:52.530 "trsvcid": "$NVMF_PORT", 00:07:52.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:52.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:52.530 "hdgst": ${hdgst:-false}, 00:07:52.530 "ddgst": ${ddgst:-false} 00:07:52.530 }, 00:07:52.530 "method": "bdev_nvme_attach_controller" 00:07:52.530 } 00:07:52.530 EOF 00:07:52.530 )") 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2386842 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:52.530 { 00:07:52.530 "params": { 
00:07:52.530 "name": "Nvme$subsystem", 00:07:52.530 "trtype": "$TEST_TRANSPORT", 00:07:52.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:52.530 "adrfam": "ipv4", 00:07:52.530 "trsvcid": "$NVMF_PORT", 00:07:52.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:52.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:52.530 "hdgst": ${hdgst:-false}, 00:07:52.530 "ddgst": ${ddgst:-false} 00:07:52.530 }, 00:07:52.530 "method": "bdev_nvme_attach_controller" 00:07:52.530 } 00:07:52.530 EOF 00:07:52.530 )") 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:52.530 { 00:07:52.530 "params": { 00:07:52.530 "name": "Nvme$subsystem", 00:07:52.530 "trtype": "$TEST_TRANSPORT", 00:07:52.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:52.530 "adrfam": "ipv4", 00:07:52.530 "trsvcid": "$NVMF_PORT", 00:07:52.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:52.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:52.530 "hdgst": ${hdgst:-false}, 00:07:52.530 "ddgst": ${ddgst:-false} 00:07:52.530 }, 00:07:52.530 "method": "bdev_nvme_attach_controller" 00:07:52.530 } 00:07:52.530 EOF 00:07:52.530 )") 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2386835 00:07:52.530 12:31:35 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:52.530 "params": { 00:07:52.530 "name": "Nvme1", 00:07:52.530 "trtype": "tcp", 00:07:52.530 "traddr": "10.0.0.2", 00:07:52.530 "adrfam": "ipv4", 00:07:52.530 "trsvcid": "4420", 00:07:52.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:52.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:52.530 "hdgst": false, 00:07:52.530 "ddgst": false 00:07:52.530 }, 00:07:52.530 "method": "bdev_nvme_attach_controller" 00:07:52.530 }' 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:52.530 "params": { 00:07:52.530 "name": "Nvme1", 00:07:52.530 "trtype": "tcp", 00:07:52.530 "traddr": "10.0.0.2", 00:07:52.530 "adrfam": "ipv4", 00:07:52.530 "trsvcid": "4420", 00:07:52.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:52.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:52.530 "hdgst": false, 00:07:52.530 "ddgst": false 00:07:52.530 }, 00:07:52.530 "method": "bdev_nvme_attach_controller" 00:07:52.530 }' 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:52.530 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:52.530 "params": { 00:07:52.530 "name": "Nvme1", 00:07:52.530 "trtype": "tcp", 00:07:52.530 "traddr": "10.0.0.2", 00:07:52.530 "adrfam": "ipv4", 00:07:52.530 "trsvcid": "4420", 00:07:52.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:52.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:52.530 "hdgst": false, 00:07:52.530 "ddgst": false 00:07:52.530 }, 00:07:52.531 "method": "bdev_nvme_attach_controller" 00:07:52.531 }' 00:07:52.531 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:52.531 12:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:52.531 "params": { 00:07:52.531 "name": "Nvme1", 00:07:52.531 "trtype": "tcp", 00:07:52.531 "traddr": "10.0.0.2", 00:07:52.531 "adrfam": "ipv4", 00:07:52.531 "trsvcid": "4420", 00:07:52.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:52.531 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:52.531 "hdgst": false, 00:07:52.531 "ddgst": false 00:07:52.531 }, 00:07:52.531 "method": "bdev_nvme_attach_controller" 00:07:52.531 }' 00:07:52.788 [2024-11-28 12:31:35.047657] Starting SPDK v25.01-pre git sha1 
bf92c7a42 / DPDK 24.03.0 initialization... 00:07:52.788 [2024-11-28 12:31:35.047658] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:07:52.788 [2024-11-28 12:31:35.047659] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:07:52.788 [2024-11-28 12:31:35.047708] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:52.788 [2024-11-28 12:31:35.047708] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:52.788 [2024-11-28 12:31:35.047707] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:52.788 [2024-11-28 12:31:35.050242] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:07:52.788 [2024-11-28 12:31:35.050290] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:52.788 [2024-11-28 12:31:35.242906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.788 [2024-11-28 12:31:35.285929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:53.046 [2024-11-28 12:31:35.335584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.046 [2024-11-28 12:31:35.378708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:53.046 [2024-11-28 12:31:35.434009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.046 [2024-11-28 12:31:35.477934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.046 [2024-11-28 12:31:35.494014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:53.046 [2024-11-28 12:31:35.520706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:53.305 Running I/O for 1 seconds... 00:07:53.305 Running I/O for 1 seconds... 00:07:53.305 Running I/O for 1 seconds... 00:07:53.305 Running I/O for 1 seconds... 
00:07:54.242 236576.00 IOPS, 924.12 MiB/s 00:07:54.242 Latency(us) 00:07:54.242 [2024-11-28T11:31:36.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.242 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:54.242 Nvme1n1 : 1.00 236209.14 922.69 0.00 0.00 539.04 227.06 1538.67 00:07:54.242 [2024-11-28T11:31:36.761Z] =================================================================================================================== 00:07:54.243 [2024-11-28T11:31:36.762Z] Total : 236209.14 922.69 0.00 0.00 539.04 227.06 1538.67 00:07:54.243 11371.00 IOPS, 44.42 MiB/s 00:07:54.243 Latency(us) 00:07:54.243 [2024-11-28T11:31:36.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.243 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:54.243 Nvme1n1 : 1.01 11430.39 44.65 0.00 0.00 11159.99 5613.30 18350.08 00:07:54.243 [2024-11-28T11:31:36.762Z] =================================================================================================================== 00:07:54.243 [2024-11-28T11:31:36.762Z] Total : 11430.39 44.65 0.00 0.00 11159.99 5613.30 18350.08 00:07:54.243 11179.00 IOPS, 43.67 MiB/s 00:07:54.243 Latency(us) 00:07:54.243 [2024-11-28T11:31:36.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.243 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:54.243 Nvme1n1 : 1.01 11249.82 43.94 0.00 0.00 11343.54 4331.07 19147.91 00:07:54.243 [2024-11-28T11:31:36.762Z] =================================================================================================================== 00:07:54.243 [2024-11-28T11:31:36.762Z] Total : 11249.82 43.94 0.00 0.00 11343.54 4331.07 19147.91 00:07:54.502 9663.00 IOPS, 37.75 MiB/s 00:07:54.502 Latency(us) 00:07:54.502 [2024-11-28T11:31:37.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.502 Job: Nvme1n1 (Core 
Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:54.502 Nvme1n1 : 1.01 9738.80 38.04 0.00 0.00 13103.18 4673.00 22567.18 00:07:54.502 [2024-11-28T11:31:37.021Z] =================================================================================================================== 00:07:54.502 [2024-11-28T11:31:37.021Z] Total : 9738.80 38.04 0.00 0.00 13103.18 4673.00 22567.18 00:07:54.502 12:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2386837 00:07:54.502 12:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2386839 00:07:54.502 12:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2386842 00:07:54.502 12:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:54.502 12:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.502 12:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:54.502 12:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.502 12:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:54.502 12:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:54.502 12:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:54.502 12:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:54.502 12:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:54.502 12:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:54.502 12:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
00:07:54.502 12:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:54.502 rmmod nvme_tcp 00:07:54.502 rmmod nvme_fabrics 00:07:54.502 rmmod nvme_keyring 00:07:54.502 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:54.502 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:54.502 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:54.502 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2386811 ']' 00:07:54.502 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2386811 00:07:54.502 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2386811 ']' 00:07:54.502 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2386811 00:07:54.502 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:54.502 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.502 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2386811 00:07:54.761 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:54.761 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:54.761 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2386811' 00:07:54.761 killing process with pid 2386811 00:07:54.761 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2386811 00:07:54.761 12:31:37 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2386811 00:07:54.761 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:54.761 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:54.761 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:54.761 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:54.761 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:54.761 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:54.761 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:54.761 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:54.761 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:54.761 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.761 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.761 12:31:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.298 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:57.298 00:07:57.298 real 0m10.016s 00:07:57.298 user 0m16.325s 00:07:57.298 sys 0m5.629s 00:07:57.298 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.298 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:57.298 ************************************ 
00:07:57.298 END TEST nvmf_bdev_io_wait 00:07:57.298 ************************************ 00:07:57.298 12:31:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:57.298 12:31:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:57.298 12:31:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.298 12:31:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:57.298 ************************************ 00:07:57.298 START TEST nvmf_queue_depth 00:07:57.298 ************************************ 00:07:57.298 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:57.298 * Looking for test storage... 00:07:57.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.298 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:57.298 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:57.298 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:57.298 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:57.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.299 --rc genhtml_branch_coverage=1 00:07:57.299 --rc genhtml_function_coverage=1 00:07:57.299 --rc genhtml_legend=1 00:07:57.299 --rc geninfo_all_blocks=1 00:07:57.299 --rc 
geninfo_unexecuted_blocks=1 00:07:57.299 00:07:57.299 ' 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:57.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.299 --rc genhtml_branch_coverage=1 00:07:57.299 --rc genhtml_function_coverage=1 00:07:57.299 --rc genhtml_legend=1 00:07:57.299 --rc geninfo_all_blocks=1 00:07:57.299 --rc geninfo_unexecuted_blocks=1 00:07:57.299 00:07:57.299 ' 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:57.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.299 --rc genhtml_branch_coverage=1 00:07:57.299 --rc genhtml_function_coverage=1 00:07:57.299 --rc genhtml_legend=1 00:07:57.299 --rc geninfo_all_blocks=1 00:07:57.299 --rc geninfo_unexecuted_blocks=1 00:07:57.299 00:07:57.299 ' 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:57.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.299 --rc genhtml_branch_coverage=1 00:07:57.299 --rc genhtml_function_coverage=1 00:07:57.299 --rc genhtml_legend=1 00:07:57.299 --rc geninfo_all_blocks=1 00:07:57.299 --rc geninfo_unexecuted_blocks=1 00:07:57.299 00:07:57.299 ' 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.299 12:31:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.299 12:31:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:57.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.299 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:57.300 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:57.300 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:57.300 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.300 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.300 12:31:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.300 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:57.300 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:57.300 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:57.300 12:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.568 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:02.568 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:02.569 12:31:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:02.569 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:02.569 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:02.569 Found net devices under 0000:86:00.0: cvl_0_0 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:02.569 Found net devices under 0000:86:00.1: cvl_0_1 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.569 
12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.569 12:31:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.569 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.569 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.828 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:02.828 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.828 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:02.828 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.828 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:02.828 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:02.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:08:02.828 00:08:02.828 --- 10.0.0.2 ping statistics --- 00:08:02.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.828 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:08:02.828 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:02.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:08:02.828 00:08:02.828 --- 10.0.0.1 ping statistics --- 00:08:02.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.828 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:08:02.828 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.828 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:02.828 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:02.828 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.828 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:02.828 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:02.828 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.828 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:02.828 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:02.828 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:02.828 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:02.828 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:02.828 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.828 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2390633 00:08:02.828 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
2390633 00:08:02.828 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:02.829 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2390633 ']' 00:08:02.829 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.829 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.829 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.829 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.829 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.829 [2024-11-28 12:31:45.294691] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:08:02.829 [2024-11-28 12:31:45.294745] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.089 [2024-11-28 12:31:45.368583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.089 [2024-11-28 12:31:45.411962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.089 [2024-11-28 12:31:45.411999] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:03.089 [2024-11-28 12:31:45.412007] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.089 [2024-11-28 12:31:45.412014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.089 [2024-11-28 12:31:45.412019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.089 [2024-11-28 12:31:45.412578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.089 [2024-11-28 12:31:45.550056] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.089 Malloc0 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.089 [2024-11-28 12:31:45.592581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.089 12:31:45 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2390856 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2390856 /var/tmp/bdevperf.sock 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2390856 ']' 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:03.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.089 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.348 [2024-11-28 12:31:45.642441] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:08:03.348 [2024-11-28 12:31:45.642482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2390856 ] 00:08:03.348 [2024-11-28 12:31:45.705312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.348 [2024-11-28 12:31:45.746188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.348 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.348 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:03.348 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:03.348 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.348 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.608 NVMe0n1 00:08:03.608 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.608 12:31:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:03.608 Running I/O for 10 seconds... 
00:08:06.014 11278.00 IOPS, 44.05 MiB/s [2024-11-28T11:31:49.163Z] 11767.50 IOPS, 45.97 MiB/s [2024-11-28T11:31:50.099Z] 11937.33 IOPS, 46.63 MiB/s [2024-11-28T11:31:51.074Z] 11848.00 IOPS, 46.28 MiB/s [2024-11-28T11:31:52.451Z] 11876.20 IOPS, 46.39 MiB/s [2024-11-28T11:31:53.390Z] 11933.83 IOPS, 46.62 MiB/s [2024-11-28T11:31:54.327Z] 11969.29 IOPS, 46.76 MiB/s [2024-11-28T11:31:55.267Z] 12004.50 IOPS, 46.89 MiB/s [2024-11-28T11:31:56.207Z] 11993.22 IOPS, 46.85 MiB/s [2024-11-28T11:31:56.207Z] 11994.70 IOPS, 46.85 MiB/s 00:08:13.688 Latency(us) 00:08:13.688 [2024-11-28T11:31:56.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.688 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:13.688 Verification LBA range: start 0x0 length 0x4000 00:08:13.688 NVMe0n1 : 10.05 12029.50 46.99 0.00 0.00 84816.22 10086.85 57215.78 00:08:13.688 [2024-11-28T11:31:56.207Z] =================================================================================================================== 00:08:13.688 [2024-11-28T11:31:56.207Z] Total : 12029.50 46.99 0.00 0.00 84816.22 10086.85 57215.78 00:08:13.688 { 00:08:13.688 "results": [ 00:08:13.688 { 00:08:13.688 "job": "NVMe0n1", 00:08:13.688 "core_mask": "0x1", 00:08:13.688 "workload": "verify", 00:08:13.688 "status": "finished", 00:08:13.688 "verify_range": { 00:08:13.688 "start": 0, 00:08:13.688 "length": 16384 00:08:13.688 }, 00:08:13.688 "queue_depth": 1024, 00:08:13.688 "io_size": 4096, 00:08:13.688 "runtime": 10.051372, 00:08:13.688 "iops": 12029.502042109276, 00:08:13.688 "mibps": 46.99024235198936, 00:08:13.688 "io_failed": 0, 00:08:13.688 "io_timeout": 0, 00:08:13.688 "avg_latency_us": 84816.22493511144, 00:08:13.688 "min_latency_us": 10086.845217391305, 00:08:13.688 "max_latency_us": 57215.77739130435 00:08:13.688 } 00:08:13.688 ], 00:08:13.688 "core_count": 1 00:08:13.688 } 00:08:13.688 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 2390856 00:08:13.688 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2390856 ']' 00:08:13.688 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2390856 00:08:13.688 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:13.688 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.688 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2390856 00:08:13.688 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.688 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.688 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2390856' 00:08:13.688 killing process with pid 2390856 00:08:13.688 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2390856 00:08:13.688 Received shutdown signal, test time was about 10.000000 seconds 00:08:13.688 00:08:13.688 Latency(us) 00:08:13.688 [2024-11-28T11:31:56.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.688 [2024-11-28T11:31:56.207Z] =================================================================================================================== 00:08:13.688 [2024-11-28T11:31:56.207Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:13.688 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2390856 00:08:13.947 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:13.947 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:08:13.947 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:13.947 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:13.947 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:13.947 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:13.947 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:13.947 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:13.947 rmmod nvme_tcp 00:08:13.947 rmmod nvme_fabrics 00:08:13.947 rmmod nvme_keyring 00:08:13.947 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:13.947 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:13.947 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:13.947 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2390633 ']' 00:08:13.947 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2390633 00:08:13.947 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2390633 ']' 00:08:13.947 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2390633 00:08:13.947 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:13.947 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.947 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2390633 00:08:14.207 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:08:14.207 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:14.207 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2390633' 00:08:14.207 killing process with pid 2390633 00:08:14.207 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2390633 00:08:14.207 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2390633 00:08:14.207 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:14.207 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:14.207 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:14.207 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:14.207 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:14.207 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:14.207 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:14.207 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:14.207 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:14.207 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.207 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.207 12:31:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.743 12:31:58 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:16.743 00:08:16.743 real 0m19.369s 00:08:16.743 user 0m22.825s 00:08:16.743 sys 0m5.877s 00:08:16.743 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.743 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.743 ************************************ 00:08:16.743 END TEST nvmf_queue_depth 00:08:16.743 ************************************ 00:08:16.743 12:31:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:16.743 12:31:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:16.743 12:31:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.743 12:31:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:16.743 ************************************ 00:08:16.743 START TEST nvmf_target_multipath 00:08:16.743 ************************************ 00:08:16.743 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:16.743 * Looking for test storage... 
00:08:16.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.743 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:16.743 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:16.743 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:16.743 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:16.743 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.743 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.743 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.743 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.743 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.743 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.743 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:16.744 12:31:58 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:16.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.744 --rc genhtml_branch_coverage=1 00:08:16.744 --rc genhtml_function_coverage=1 00:08:16.744 --rc genhtml_legend=1 00:08:16.744 --rc geninfo_all_blocks=1 00:08:16.744 --rc geninfo_unexecuted_blocks=1 00:08:16.744 00:08:16.744 ' 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:16.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.744 --rc genhtml_branch_coverage=1 00:08:16.744 --rc genhtml_function_coverage=1 00:08:16.744 --rc genhtml_legend=1 00:08:16.744 --rc geninfo_all_blocks=1 00:08:16.744 --rc geninfo_unexecuted_blocks=1 00:08:16.744 00:08:16.744 ' 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:16.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.744 --rc genhtml_branch_coverage=1 00:08:16.744 --rc genhtml_function_coverage=1 00:08:16.744 --rc genhtml_legend=1 00:08:16.744 --rc geninfo_all_blocks=1 00:08:16.744 --rc geninfo_unexecuted_blocks=1 00:08:16.744 00:08:16.744 ' 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:16.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.744 --rc genhtml_branch_coverage=1 00:08:16.744 --rc genhtml_function_coverage=1 00:08:16.744 --rc genhtml_legend=1 00:08:16.744 --rc geninfo_all_blocks=1 00:08:16.744 --rc geninfo_unexecuted_blocks=1 00:08:16.744 00:08:16.744 ' 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:16.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.744 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:16.745 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.745 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:16.745 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:16.745 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:16.745 12:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:22.019 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:22.020 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:22.020 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:22.020 Found net devices under 0000:86:00.0: cvl_0_0 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:22.020 12:32:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:22.020 Found net devices under 0000:86:00.1: cvl_0_1 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:22.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:22.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:08:22.020 00:08:22.020 --- 10.0.0.2 ping statistics --- 00:08:22.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.020 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:22.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:22.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:08:22.020 00:08:22.020 --- 10.0.0.1 ping statistics --- 00:08:22.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.020 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:22.020 only one NIC for nvmf test 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:22.020 12:32:04 
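The `nvmf_tcp_init` trace above amounts to a small network-namespace recipe: one port of the NIC pair is moved into a fresh namespace to act as the NVMe-oF target, the other port stays in the root namespace as the initiator, and the iptables ACCEPT rule is tagged with an `SPDK_NVMF` comment so teardown can later strip it in one pass via `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A condensed sketch of those commands (requires root; interface names and IPs match this run, and the comment text is shortened here):

```shell
# Condensed from the nvmf_tcp_init trace above (run as root).
ip netns add cvl_0_0_ns_spdk               # namespace that will host the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk  # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Tag the rule so cleanup can remove every SPDK-added rule by comment:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF: nvmf target port'
ping -c 1 10.0.0.2                         # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
```

Because the target interface lives in its own namespace, every target-side command in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array).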
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:22.020 rmmod nvme_tcp 00:08:22.020 rmmod nvme_fabrics 00:08:22.020 rmmod nvme_keyring 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.020 12:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.557 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:24.557 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:24.558 00:08:24.558 real 0m7.744s 00:08:24.558 user 0m1.503s 00:08:24.558 sys 0m4.134s 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:24.558 ************************************ 00:08:24.558 END TEST nvmf_target_multipath 00:08:24.558 ************************************ 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:24.558 ************************************ 00:08:24.558 START TEST nvmf_zcopy 00:08:24.558 ************************************ 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:24.558 * Looking for test storage... 00:08:24.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:24.558 12:32:06 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:24.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.558 --rc genhtml_branch_coverage=1 00:08:24.558 --rc genhtml_function_coverage=1 00:08:24.558 --rc genhtml_legend=1 00:08:24.558 --rc geninfo_all_blocks=1 00:08:24.558 --rc geninfo_unexecuted_blocks=1 00:08:24.558 00:08:24.558 ' 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:24.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.558 --rc genhtml_branch_coverage=1 00:08:24.558 --rc genhtml_function_coverage=1 00:08:24.558 --rc genhtml_legend=1 00:08:24.558 --rc geninfo_all_blocks=1 00:08:24.558 --rc geninfo_unexecuted_blocks=1 00:08:24.558 00:08:24.558 ' 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:24.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.558 --rc genhtml_branch_coverage=1 00:08:24.558 --rc genhtml_function_coverage=1 00:08:24.558 --rc genhtml_legend=1 00:08:24.558 --rc geninfo_all_blocks=1 00:08:24.558 --rc geninfo_unexecuted_blocks=1 00:08:24.558 00:08:24.558 ' 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:24.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.558 --rc genhtml_branch_coverage=1 00:08:24.558 --rc 
genhtml_function_coverage=1 00:08:24.558 --rc genhtml_legend=1 00:08:24.558 --rc geninfo_all_blocks=1 00:08:24.558 --rc geninfo_unexecuted_blocks=1 00:08:24.558 00:08:24.558 ' 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.558 12:32:06 
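The `cmp_versions` trace a little earlier (the `lt 1.15 2` check on the installed lcov version in `scripts/common.sh`) is a field-by-field comparison of dotted version strings, padding the shorter one with zeros. A minimal stand-alone sketch of that idea (`version_lt` is a hypothetical name for illustration, not SPDK's actual function):

```shell
# Compare dotted versions field by field; succeed (exit 0) iff $1 < $2.
version_lt() {
  local IFS=.                    # make unquoted expansion split on dots
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}   # pad missing fields with 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1                       # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov predates 2.x"   # prints "lcov predates 2.x"
```

This is why the trace walks `ver1[v]`/`ver2[v]` index by index: `1.15 < 2` is decided on the first field, without ever comparing `15` against a missing second field of `2`.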
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.558 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:24.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:24.559 12:32:06 
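The line `[: : integer expression expected` logged above from `nvmf/common.sh` line 33 comes from passing an empty string to an arithmetic `test` operator: `'[' '' -eq 1 ']'` fails because `''` is not an integer, so the branch silently falls through. A small reproduction, plus the usual hardening of defaulting an empty variable to 0 (`flag` is a placeholder name, and the `${flag:-0}` fix is a common idiom, not necessarily what SPDK does):

```shell
flag=""   # e.g. an unset/unexported test toggle, as in the trace above

# Reproduces the logged error: '' is not an integer, so test exits nonzero
# (status 2) and the condition is treated as false.
if [ "$flag" -eq 1 ] 2>/dev/null; then echo "enabled"; fi

# Hardened form: substitute 0 when the variable is empty or unset.
if [ "${flag:-0}" -eq 1 ]; then echo "enabled"; else echo "disabled"; fi
# prints "disabled"
```

The test harness tolerates this because the error goes to stderr and the `if` simply takes the false branch, which is why the run continues normally after the message.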
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:24.559 12:32:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:29.833 12:32:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:29.833 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:29.833 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.833 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:29.834 Found net devices under 0000:86:00.0: cvl_0_0 00:08:29.834 12:32:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:29.834 Found net devices under 0000:86:00.1: cvl_0_1 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.834 12:32:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:29.834 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:30.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:30.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms
00:08:30.093
00:08:30.093 --- 10.0.0.2 ping statistics ---
00:08:30.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:30.093 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms
00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:30.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:30.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms
00:08:30.093
00:08:30.093 --- 10.0.0.1 ping statistics ---
00:08:30.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:30.093 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms
00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2399564
00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2399564
00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 --
# '[' -z 2399564 ']' 00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.093 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:30.093 [2024-11-28 12:32:12.535709] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:08:30.093 [2024-11-28 12:32:12.535755] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.093 [2024-11-28 12:32:12.601351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.352 [2024-11-28 12:32:12.642573] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.352 [2024-11-28 12:32:12.642606] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:30.352 [2024-11-28 12:32:12.642613] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.352 [2024-11-28 12:32:12.642619] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.352 [2024-11-28 12:32:12.642625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.352 [2024-11-28 12:32:12.643168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.352 [2024-11-28 12:32:12.770716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.352 [2024-11-28 12:32:12.786904] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.352 malloc0 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:30.352 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.353 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.353 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.353 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:30.353 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:30.353 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:30.353 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:30.353 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:30.353 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:30.353 { 00:08:30.353 "params": { 00:08:30.353 "name": "Nvme$subsystem", 00:08:30.353 "trtype": "$TEST_TRANSPORT", 00:08:30.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:30.353 "adrfam": "ipv4", 00:08:30.353 "trsvcid": "$NVMF_PORT", 00:08:30.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:30.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:30.353 "hdgst": ${hdgst:-false}, 00:08:30.353 "ddgst": ${ddgst:-false} 00:08:30.353 }, 00:08:30.353 "method": "bdev_nvme_attach_controller" 00:08:30.353 } 00:08:30.353 EOF 00:08:30.353 )") 00:08:30.353 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:30.353 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:30.353 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:30.353 12:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:30.353 "params": { 00:08:30.353 "name": "Nvme1", 00:08:30.353 "trtype": "tcp", 00:08:30.353 "traddr": "10.0.0.2", 00:08:30.353 "adrfam": "ipv4", 00:08:30.353 "trsvcid": "4420", 00:08:30.353 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:30.353 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:30.353 "hdgst": false, 00:08:30.353 "ddgst": false 00:08:30.353 }, 00:08:30.353 "method": "bdev_nvme_attach_controller" 00:08:30.353 }' 00:08:30.353 [2024-11-28 12:32:12.865229] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:08:30.353 [2024-11-28 12:32:12.865274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2399626 ] 00:08:30.612 [2024-11-28 12:32:12.926772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.612 [2024-11-28 12:32:12.968178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.870 Running I/O for 10 seconds... 
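An aside on the trace above: the `--json /dev/fd/62` argument feeds bdevperf a config object generated on the fly by `gen_nvmf_target_json`, and the object it prints is visible in the log. A minimal stand-alone sketch that reproduces the same object is shown below; field values are exactly those observed in this run, and `gen_attach_config` is a hypothetical name for illustration, not the SPDK helper itself.

```shell
#!/usr/bin/env bash
# Sketch of the attach-controller config that gen_nvmf_target_json emits
# in the trace above. Simplified stand-in, NOT SPDK's own helper; values
# (traddr 10.0.0.2, port 4420, nqn prefixes) are as seen in this run.
gen_attach_config() {
  local n=${1:-1} traddr=${2:-10.0.0.2} trsvcid=${3:-4420}
  cat <<EOF
{
  "params": {
    "name": "Nvme$n",
    "trtype": "tcp",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode$n",
    "hostnqn": "nqn.2016-06.io.spdk:host$n",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_attach_config 1
```

In the actual test this object is wrapped and piped to bdevperf over a file descriptor, which is why the trace shows `--json /dev/fd/62` rather than a file on disk.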
00:08:32.818 8496.00 IOPS, 66.38 MiB/s [2024-11-28T11:32:16.715Z] 8553.00 IOPS, 66.82 MiB/s [2024-11-28T11:32:17.651Z] 8584.00 IOPS, 67.06 MiB/s [2024-11-28T11:32:18.587Z] 8601.75 IOPS, 67.20 MiB/s [2024-11-28T11:32:19.522Z] 8605.20 IOPS, 67.23 MiB/s [2024-11-28T11:32:20.456Z] 8606.83 IOPS, 67.24 MiB/s [2024-11-28T11:32:21.391Z] 8607.43 IOPS, 67.25 MiB/s [2024-11-28T11:32:22.768Z] 8609.38 IOPS, 67.26 MiB/s [2024-11-28T11:32:23.707Z] 8613.78 IOPS, 67.30 MiB/s [2024-11-28T11:32:23.707Z] 8616.10 IOPS, 67.31 MiB/s
00:08:41.188 Latency(us)
00:08:41.188 [2024-11-28T11:32:23.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:41.188 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:08:41.188 Verification LBA range: start 0x0 length 0x1000
00:08:41.188 Nvme1n1 : 10.01 8615.96 67.31 0.00 0.00 14806.58 751.53 22909.11
00:08:41.188 [2024-11-28T11:32:23.707Z] ===================================================================================================================
00:08:41.188 [2024-11-28T11:32:23.707Z] Total : 8615.96 67.31 0.00 0.00 14806.58 751.53 22909.11
00:08:41.188 12:32:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2401421
00:08:41.188 12:32:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:08:41.188 12:32:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:41.188 12:32:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:08:41.188 12:32:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:08:41.188 12:32:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:08:41.188 12:32:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:08:41.188 12:32:23
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:41.188 12:32:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:41.188 { 00:08:41.188 "params": { 00:08:41.188 "name": "Nvme$subsystem", 00:08:41.188 "trtype": "$TEST_TRANSPORT", 00:08:41.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.188 "adrfam": "ipv4", 00:08:41.188 "trsvcid": "$NVMF_PORT", 00:08:41.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.188 "hdgst": ${hdgst:-false}, 00:08:41.188 "ddgst": ${ddgst:-false} 00:08:41.188 }, 00:08:41.188 "method": "bdev_nvme_attach_controller" 00:08:41.188 } 00:08:41.188 EOF 00:08:41.188 )") 00:08:41.188 12:32:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:41.188 [2024-11-28 12:32:23.523961] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.188 [2024-11-28 12:32:23.523992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.188 12:32:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:41.188 12:32:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:41.188 12:32:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:41.188 "params": { 00:08:41.188 "name": "Nvme1", 00:08:41.188 "trtype": "tcp", 00:08:41.188 "traddr": "10.0.0.2", 00:08:41.188 "adrfam": "ipv4", 00:08:41.188 "trsvcid": "4420", 00:08:41.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.188 "hdgst": false, 00:08:41.188 "ddgst": false 00:08:41.188 }, 00:08:41.188 "method": "bdev_nvme_attach_controller" 00:08:41.188 }' 00:08:41.188 [2024-11-28 12:32:23.531951] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.188 [2024-11-28 12:32:23.531968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.188 [2024-11-28 12:32:23.539970] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.188 [2024-11-28 12:32:23.539981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.188 [2024-11-28 12:32:23.547991] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.188 [2024-11-28 12:32:23.548002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.188 [2024-11-28 12:32:23.550094] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:08:41.188 [2024-11-28 12:32:23.550138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2401421 ] 00:08:41.188 [2024-11-28 12:32:23.556011] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.188 [2024-11-28 12:32:23.556022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.188 [2024-11-28 12:32:23.568048] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.188 [2024-11-28 12:32:23.568065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.188 [2024-11-28 12:32:23.576066] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.188 [2024-11-28 12:32:23.576078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.188 [2024-11-28 12:32:23.584088] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.188 [2024-11-28 12:32:23.584099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.188 [2024-11-28 12:32:23.592108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.188 [2024-11-28 12:32:23.592118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.188 [2024-11-28 12:32:23.600128] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.188 [2024-11-28 12:32:23.600138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.188 [2024-11-28 12:32:23.608149] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.188 [2024-11-28 12:32:23.608160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:08:41.188 [2024-11-28 12:32:23.612899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.188 [2024-11-28 12:32:23.616171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.188 [2024-11-28 12:32:23.616181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.188 [2024-11-28 12:32:23.624196] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.188 [2024-11-28 12:32:23.624210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.188 [2024-11-28 12:32:23.632216] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.188 [2024-11-28 12:32:23.632227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.188 [2024-11-28 12:32:23.640237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.188 [2024-11-28 12:32:23.640248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.188 [2024-11-28 12:32:23.648259] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.188 [2024-11-28 12:32:23.648270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.188 [2024-11-28 12:32:23.655160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.188 [2024-11-28 12:32:23.656282] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.188 [2024-11-28 12:32:23.656295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.188 [2024-11-28 12:32:23.664306] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.188 [2024-11-28 12:32:23.664321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.188 [2024-11-28 12:32:23.672338] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.188 [2024-11-28 12:32:23.672359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.188 [2024-11-28 12:32:23.680351] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.188 [2024-11-28 12:32:23.680367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.188 [2024-11-28 12:32:23.688369] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.188 [2024-11-28 12:32:23.688383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.188 [2024-11-28 12:32:23.696391] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.188 [2024-11-28 12:32:23.696404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.188 [2024-11-28 12:32:23.704409] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.188 [2024-11-28 12:32:23.704421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.447 [2024-11-28 12:32:23.712433] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.447 [2024-11-28 12:32:23.712447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.447 [2024-11-28 12:32:23.720453] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.447 [2024-11-28 12:32:23.720466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.447 [2024-11-28 12:32:23.728474] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.447 [2024-11-28 12:32:23.728486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.447 [2024-11-28 12:32:23.736495] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:41.447 [2024-11-28 12:32:23.736506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.447 [2024-11-28 12:32:23.744516] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.447 [2024-11-28 12:32:23.744526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.447 [2024-11-28 12:32:23.752558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.447 [2024-11-28 12:32:23.752580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.447 [2024-11-28 12:32:23.760568] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.447 [2024-11-28 12:32:23.760582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.447 [2024-11-28 12:32:23.768591] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.447 [2024-11-28 12:32:23.768606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.447 [2024-11-28 12:32:23.776616] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.447 [2024-11-28 12:32:23.776632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.447 [2024-11-28 12:32:23.784636] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.447 [2024-11-28 12:32:23.784649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.447 [2024-11-28 12:32:23.792659] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.447 [2024-11-28 12:32:23.792674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.447 [2024-11-28 12:32:23.800678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.447 
[2024-11-28 12:32:23.800689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:41.447 [2024-11-28 12:32:23.808708] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:41.447 [2024-11-28 12:32:23.808727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:41.447 Running I/O for 5 seconds...
[... the same error pair ("Requested NSID 1 already in use" from subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext, followed by "Unable to add namespace" from nvmf_rpc.c:1520:nvmf_rpc_ns_paused) repeats from 12:32:23.816722 through 12:32:24.815655 ...]
00:08:42.485 16313.00 IOPS, 127.45 MiB/s [2024-11-28T11:32:25.004Z]
[... the same error pair repeats from 12:32:24.830047 through 12:32:25.637488 ...]
00:08:43.264 [2024-11-28 12:32:25.651633]
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.264 [2024-11-28 12:32:25.651654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.264 [2024-11-28 12:32:25.665907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.265 [2024-11-28 12:32:25.665927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.265 [2024-11-28 12:32:25.680451] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.265 [2024-11-28 12:32:25.680473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.265 [2024-11-28 12:32:25.694378] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.265 [2024-11-28 12:32:25.694398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.265 [2024-11-28 12:32:25.708606] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.265 [2024-11-28 12:32:25.708625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.265 [2024-11-28 12:32:25.722535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.265 [2024-11-28 12:32:25.722555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.265 [2024-11-28 12:32:25.736710] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.265 [2024-11-28 12:32:25.736730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.265 [2024-11-28 12:32:25.750647] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.265 [2024-11-28 12:32:25.750666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.265 [2024-11-28 12:32:25.764854] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:43.265 [2024-11-28 12:32:25.764873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.265 [2024-11-28 12:32:25.778693] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.265 [2024-11-28 12:32:25.778714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.524 [2024-11-28 12:32:25.793096] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.524 [2024-11-28 12:32:25.793115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.524 [2024-11-28 12:32:25.803705] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.524 [2024-11-28 12:32:25.803724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.524 [2024-11-28 12:32:25.818385] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.524 [2024-11-28 12:32:25.818403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.524 16398.50 IOPS, 128.11 MiB/s [2024-11-28T11:32:26.043Z] [2024-11-28 12:32:25.832078] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.524 [2024-11-28 12:32:25.832097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.524 [2024-11-28 12:32:25.846493] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.524 [2024-11-28 12:32:25.846513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.524 [2024-11-28 12:32:25.857303] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.524 [2024-11-28 12:32:25.857322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.524 [2024-11-28 12:32:25.872059] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:43.524 [2024-11-28 12:32:25.872079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.524 [2024-11-28 12:32:25.886035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.524 [2024-11-28 12:32:25.886056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.524 [2024-11-28 12:32:25.900315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.524 [2024-11-28 12:32:25.900334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.524 [2024-11-28 12:32:25.914024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.524 [2024-11-28 12:32:25.914043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.525 [2024-11-28 12:32:25.928597] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.525 [2024-11-28 12:32:25.928617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.525 [2024-11-28 12:32:25.944158] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.525 [2024-11-28 12:32:25.944180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.525 [2024-11-28 12:32:25.958764] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.525 [2024-11-28 12:32:25.958785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.525 [2024-11-28 12:32:25.969450] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.525 [2024-11-28 12:32:25.969468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.525 [2024-11-28 12:32:25.984118] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.525 
[2024-11-28 12:32:25.984137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.525 [2024-11-28 12:32:25.998197] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.525 [2024-11-28 12:32:25.998217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.525 [2024-11-28 12:32:26.012433] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.525 [2024-11-28 12:32:26.012456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.525 [2024-11-28 12:32:26.026442] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.525 [2024-11-28 12:32:26.026462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.525 [2024-11-28 12:32:26.040428] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.525 [2024-11-28 12:32:26.040449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.784 [2024-11-28 12:32:26.054416] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.784 [2024-11-28 12:32:26.054436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.784 [2024-11-28 12:32:26.068397] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.784 [2024-11-28 12:32:26.068417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.784 [2024-11-28 12:32:26.079490] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.785 [2024-11-28 12:32:26.079510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.785 [2024-11-28 12:32:26.094033] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.785 [2024-11-28 12:32:26.094053] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.785 [2024-11-28 12:32:26.107694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.785 [2024-11-28 12:32:26.107713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.785 [2024-11-28 12:32:26.121844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.785 [2024-11-28 12:32:26.121864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.785 [2024-11-28 12:32:26.136358] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.785 [2024-11-28 12:32:26.136377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.785 [2024-11-28 12:32:26.147659] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.785 [2024-11-28 12:32:26.147678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.785 [2024-11-28 12:32:26.162144] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.785 [2024-11-28 12:32:26.162163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.785 [2024-11-28 12:32:26.176316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.785 [2024-11-28 12:32:26.176336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.785 [2024-11-28 12:32:26.190394] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.785 [2024-11-28 12:32:26.190413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.785 [2024-11-28 12:32:26.204628] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.785 [2024-11-28 12:32:26.204647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:43.785 [2024-11-28 12:32:26.218591] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.785 [2024-11-28 12:32:26.218612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.785 [2024-11-28 12:32:26.233045] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.785 [2024-11-28 12:32:26.233065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.785 [2024-11-28 12:32:26.243905] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.785 [2024-11-28 12:32:26.243926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.785 [2024-11-28 12:32:26.258550] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.785 [2024-11-28 12:32:26.258569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.785 [2024-11-28 12:32:26.274457] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.785 [2024-11-28 12:32:26.274483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.785 [2024-11-28 12:32:26.288607] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.785 [2024-11-28 12:32:26.288626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.044 [2024-11-28 12:32:26.303067] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.044 [2024-11-28 12:32:26.303087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.044 [2024-11-28 12:32:26.313796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.044 [2024-11-28 12:32:26.313815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.044 [2024-11-28 12:32:26.328612] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.044 [2024-11-28 12:32:26.328631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.044 [2024-11-28 12:32:26.340246] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.044 [2024-11-28 12:32:26.340266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.044 [2024-11-28 12:32:26.354970] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.044 [2024-11-28 12:32:26.354990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.044 [2024-11-28 12:32:26.368917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.044 [2024-11-28 12:32:26.368936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.044 [2024-11-28 12:32:26.383257] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.044 [2024-11-28 12:32:26.383276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.044 [2024-11-28 12:32:26.394802] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.044 [2024-11-28 12:32:26.394821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.044 [2024-11-28 12:32:26.409193] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.044 [2024-11-28 12:32:26.409213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.044 [2024-11-28 12:32:26.423186] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.044 [2024-11-28 12:32:26.423206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.044 [2024-11-28 12:32:26.437407] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:44.044 [2024-11-28 12:32:26.437426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.044 [2024-11-28 12:32:26.451806] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.044 [2024-11-28 12:32:26.451825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.044 [2024-11-28 12:32:26.466451] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.044 [2024-11-28 12:32:26.466471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.044 [2024-11-28 12:32:26.477651] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.044 [2024-11-28 12:32:26.477670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.044 [2024-11-28 12:32:26.492276] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.044 [2024-11-28 12:32:26.492295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.044 [2024-11-28 12:32:26.505880] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.044 [2024-11-28 12:32:26.505899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.044 [2024-11-28 12:32:26.519989] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.044 [2024-11-28 12:32:26.520008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.044 [2024-11-28 12:32:26.534223] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.044 [2024-11-28 12:32:26.534246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.044 [2024-11-28 12:32:26.545280] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.044 
[2024-11-28 12:32:26.545299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.044 [2024-11-28 12:32:26.559535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.044 [2024-11-28 12:32:26.559554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.303 [2024-11-28 12:32:26.573370] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.303 [2024-11-28 12:32:26.573389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.303 [2024-11-28 12:32:26.587582] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.303 [2024-11-28 12:32:26.587601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.303 [2024-11-28 12:32:26.601867] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.303 [2024-11-28 12:32:26.601887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.303 [2024-11-28 12:32:26.613138] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.303 [2024-11-28 12:32:26.613157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.303 [2024-11-28 12:32:26.627547] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.303 [2024-11-28 12:32:26.627567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.303 [2024-11-28 12:32:26.641707] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.303 [2024-11-28 12:32:26.641727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.303 [2024-11-28 12:32:26.653037] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.303 [2024-11-28 12:32:26.653057] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.303 [2024-11-28 12:32:26.667499] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.303 [2024-11-28 12:32:26.667520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.303 [2024-11-28 12:32:26.681279] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.303 [2024-11-28 12:32:26.681301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.303 [2024-11-28 12:32:26.695613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.303 [2024-11-28 12:32:26.695633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.303 [2024-11-28 12:32:26.709494] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.303 [2024-11-28 12:32:26.709514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.303 [2024-11-28 12:32:26.723957] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.303 [2024-11-28 12:32:26.723975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.303 [2024-11-28 12:32:26.739353] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.303 [2024-11-28 12:32:26.739374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.303 [2024-11-28 12:32:26.753652] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.303 [2024-11-28 12:32:26.753672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.303 [2024-11-28 12:32:26.764931] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.303 [2024-11-28 12:32:26.764957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:44.303 [2024-11-28 12:32:26.779529] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.303 [2024-11-28 12:32:26.779550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.303 [2024-11-28 12:32:26.793110] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.303 [2024-11-28 12:32:26.793133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.303 [2024-11-28 12:32:26.807311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.303 [2024-11-28 12:32:26.807332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.562 [2024-11-28 12:32:26.821818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.563 [2024-11-28 12:32:26.821839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.563 16435.00 IOPS, 128.40 MiB/s [2024-11-28T11:32:27.082Z] [2024-11-28 12:32:26.832987] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.563 [2024-11-28 12:32:26.833006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.563 [2024-11-28 12:32:26.847726] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.563 [2024-11-28 12:32:26.847746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.563 [2024-11-28 12:32:26.862009] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.563 [2024-11-28 12:32:26.862029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.563 [2024-11-28 12:32:26.872701] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.563 [2024-11-28 12:32:26.872722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:44.563 [2024-11-28 12:32:26.887695] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.563 [2024-11-28 12:32:26.887716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.563 [2024-11-28 12:32:26.902545] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.563 [2024-11-28 12:32:26.902566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.563 [2024-11-28 12:32:26.916830] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.563 [2024-11-28 12:32:26.916850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.563 [2024-11-28 12:32:26.927965] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.563 [2024-11-28 12:32:26.927985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.563 [2024-11-28 12:32:26.942495] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.563 [2024-11-28 12:32:26.942514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.563 [2024-11-28 12:32:26.956420] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.563 [2024-11-28 12:32:26.956439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.563 [2024-11-28 12:32:26.970673] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.563 [2024-11-28 12:32:26.970692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.563 [2024-11-28 12:32:26.984858] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.563 [2024-11-28 12:32:26.984877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.563 [2024-11-28 12:32:26.998902] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.563 [2024-11-28 12:32:26.998922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.563 [2024-11-28 12:32:27.013218] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.563 [2024-11-28 12:32:27.013238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.563 [2024-11-28 12:32:27.027321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.563 [2024-11-28 12:32:27.027342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.563 [2024-11-28 12:32:27.041600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.563 [2024-11-28 12:32:27.041619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.563 [2024-11-28 12:32:27.055614] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.563 [2024-11-28 12:32:27.055633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.563 [2024-11-28 12:32:27.070018] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.563 [2024-11-28 12:32:27.070036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.822 [2024-11-28 12:32:27.085117] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.822 [2024-11-28 12:32:27.085137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.822 [2024-11-28 12:32:27.099581] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.822 [2024-11-28 12:32:27.099601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.822 [2024-11-28 12:32:27.110430] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:44.822 [2024-11-28 12:32:27.110449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.822 [2024-11-28 12:32:27.124780] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.822 [2024-11-28 12:32:27.124799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.822 [2024-11-28 12:32:27.138639] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.822 [2024-11-28 12:32:27.138658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.822 [2024-11-28 12:32:27.152701] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.822 [2024-11-28 12:32:27.152721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.822 [2024-11-28 12:32:27.166107] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.822 [2024-11-28 12:32:27.166126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.822 [2024-11-28 12:32:27.180491] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.822 [2024-11-28 12:32:27.180511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.822 [2024-11-28 12:32:27.194329] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.822 [2024-11-28 12:32:27.194348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.822 [2024-11-28 12:32:27.208558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.822 [2024-11-28 12:32:27.208577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.822 [2024-11-28 12:32:27.222115] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.822 
[2024-11-28 12:32:27.222135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.822 [2024-11-28 12:32:27.236856] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.822 [2024-11-28 12:32:27.236875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.822 [2024-11-28 12:32:27.252905] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.822 [2024-11-28 12:32:27.252924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.822 [2024-11-28 12:32:27.267430] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.822 [2024-11-28 12:32:27.267449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.822 [2024-11-28 12:32:27.277842] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.822 [2024-11-28 12:32:27.277862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.822 [2024-11-28 12:32:27.292420] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.822 [2024-11-28 12:32:27.292441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.822 [2024-11-28 12:32:27.303382] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.822 [2024-11-28 12:32:27.303401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.822 [2024-11-28 12:32:27.317998] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.822 [2024-11-28 12:32:27.318017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.822 [2024-11-28 12:32:27.331572] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.822 [2024-11-28 12:32:27.331591] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:45.082 [2024-11-28 12:32:27.345724] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:45.082 [2024-11-28 12:32:27.345744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:45.340 16464.50 IOPS, 128.63 MiB/s [2024-11-28T11:32:27.859Z]
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.377 [2024-11-28 12:32:28.807294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.377 [2024-11-28 12:32:28.821188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.377 [2024-11-28 12:32:28.821207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.377 16478.00 IOPS, 128.73 MiB/s [2024-11-28T11:32:28.896Z] [2024-11-28 12:32:28.835131] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.377 [2024-11-28 12:32:28.835150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.377 00:08:46.377 Latency(us) 00:08:46.377 [2024-11-28T11:32:28.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.377 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:46.377 Nvme1n1 : 5.01 16480.48 128.75 0.00 0.00 7759.39 3476.26 18919.96 00:08:46.377 [2024-11-28T11:32:28.896Z] =================================================================================================================== 00:08:46.377 [2024-11-28T11:32:28.896Z] Total : 16480.48 128.75 0.00 0.00 7759.39 3476.26 18919.96 00:08:46.377 [2024-11-28 12:32:28.845081] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.377 [2024-11-28 12:32:28.845099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.377 [2024-11-28 12:32:28.857109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.377 [2024-11-28 12:32:28.857125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.377 [2024-11-28 12:32:28.869151] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.377 [2024-11-28 12:32:28.869166] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2401421) - No such process
00:08:46.637 12:32:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2401421
00:08:46.637 12:32:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:46.637 12:32:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.637 12:32:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:46.637 12:32:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.637 12:32:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:46.637 12:32:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.637 12:32:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:46.637 delay0
00:08:46.637 12:32:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.637 12:32:29
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:08:46.637 12:32:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.637 12:32:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:46.637 12:32:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.637 12:32:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' [2024-11-28 12:32:29.137880] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:08:53.206 Initializing NVMe Controllers
00:08:53.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:53.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:53.207 Initialization complete. Launching workers.
00:08:53.207 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 273
00:08:53.207 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 560, failed to submit 33
00:08:53.207 success 360, unsuccessful 200, failed 0
00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:53.207 rmmod nvme_tcp
00:08:53.207 rmmod nvme_fabrics
00:08:53.207 rmmod nvme_keyring
00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2399564 ']'
00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2399564
00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2399564 ']'
00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2399564
00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy --
common/autotest_common.sh@959 -- # uname 00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2399564 00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2399564' 00:08:53.207 killing process with pid 2399564 00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2399564 00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2399564 00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.207 12:32:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.115 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:55.115 00:08:55.115 real 0m30.961s 00:08:55.115 user 0m41.818s 00:08:55.115 sys 0m10.562s 00:08:55.115 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.115 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.115 ************************************ 00:08:55.115 END TEST nvmf_zcopy 00:08:55.115 ************************************ 00:08:55.115 12:32:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:55.115 12:32:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:55.115 12:32:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.115 12:32:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:55.376 ************************************ 00:08:55.376 START TEST nvmf_nmic 00:08:55.376 ************************************ 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:55.376 * Looking for test storage... 
00:08:55.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.376 12:32:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:55.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.376 --rc genhtml_branch_coverage=1 00:08:55.376 --rc genhtml_function_coverage=1 00:08:55.376 --rc genhtml_legend=1 00:08:55.376 --rc geninfo_all_blocks=1 00:08:55.376 --rc geninfo_unexecuted_blocks=1 
00:08:55.376 00:08:55.376 ' 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:55.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.376 --rc genhtml_branch_coverage=1 00:08:55.376 --rc genhtml_function_coverage=1 00:08:55.376 --rc genhtml_legend=1 00:08:55.376 --rc geninfo_all_blocks=1 00:08:55.376 --rc geninfo_unexecuted_blocks=1 00:08:55.376 00:08:55.376 ' 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:55.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.376 --rc genhtml_branch_coverage=1 00:08:55.376 --rc genhtml_function_coverage=1 00:08:55.376 --rc genhtml_legend=1 00:08:55.376 --rc geninfo_all_blocks=1 00:08:55.376 --rc geninfo_unexecuted_blocks=1 00:08:55.376 00:08:55.376 ' 00:08:55.376 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:55.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.377 --rc genhtml_branch_coverage=1 00:08:55.377 --rc genhtml_function_coverage=1 00:08:55.377 --rc genhtml_legend=1 00:08:55.377 --rc geninfo_all_blocks=1 00:08:55.377 --rc geninfo_unexecuted_blocks=1 00:08:55.377 00:08:55.377 ' 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.377 12:32:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:55.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:55.377 
12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:55.377 12:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.655 12:32:42 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:00.655 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:00.655 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:00.655 Found net devices under 0000:86:00.0: cvl_0_0 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:00.655 Found net devices under 0000:86:00.1: cvl_0_1 00:09:00.655 
12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.655 12:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.655 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.655 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.655 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:00.655 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:00.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:09:00.915 00:09:00.915 --- 10.0.0.2 ping statistics --- 00:09:00.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.915 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:00.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:09:00.915 00:09:00.915 --- 10.0.0.1 ping statistics --- 00:09:00.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.915 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2406812 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2406812 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2406812 ']' 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.915 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:00.915 [2024-11-28 12:32:43.318453] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:09:00.915 [2024-11-28 12:32:43.318504] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.915 [2024-11-28 12:32:43.389493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:00.915 [2024-11-28 12:32:43.432199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.915 [2024-11-28 12:32:43.432242] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:00.915 [2024-11-28 12:32:43.432250] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.915 [2024-11-28 12:32:43.432257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.915 [2024-11-28 12:32:43.432263] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.174 [2024-11-28 12:32:43.433789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.174 [2024-11-28 12:32:43.433888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.174 [2024-11-28 12:32:43.434105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.174 [2024-11-28 12:32:43.434108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.174 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.174 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:01.174 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:01.174 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:01.174 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.174 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.174 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.175 [2024-11-28 12:32:43.580746] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.175 
12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.175 Malloc0 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.175 [2024-11-28 12:32:43.641814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:01.175 test case1: single bdev can't be used in multiple subsystems 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.175 [2024-11-28 12:32:43.669697] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:01.175 [2024-11-28 
12:32:43.669718] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:01.175 [2024-11-28 12:32:43.669726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.175 request: 00:09:01.175 { 00:09:01.175 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:01.175 "namespace": { 00:09:01.175 "bdev_name": "Malloc0", 00:09:01.175 "no_auto_visible": false, 00:09:01.175 "hide_metadata": false 00:09:01.175 }, 00:09:01.175 "method": "nvmf_subsystem_add_ns", 00:09:01.175 "req_id": 1 00:09:01.175 } 00:09:01.175 Got JSON-RPC error response 00:09:01.175 response: 00:09:01.175 { 00:09:01.175 "code": -32602, 00:09:01.175 "message": "Invalid parameters" 00:09:01.175 } 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:01.175 Adding namespace failed - expected result. 
00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:01.175 test case2: host connect to nvmf target in multiple paths 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.175 [2024-11-28 12:32:43.681829] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.175 12:32:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:02.553 12:32:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:03.495 12:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:03.495 12:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:03.495 12:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:03.495 12:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:03.495 12:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:09:06.029 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:06.029 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:06.029 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:06.029 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:06.029 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:06.029 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:06.029 12:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:06.029 [global] 00:09:06.029 thread=1 00:09:06.029 invalidate=1 00:09:06.029 rw=write 00:09:06.029 time_based=1 00:09:06.029 runtime=1 00:09:06.029 ioengine=libaio 00:09:06.029 direct=1 00:09:06.029 bs=4096 00:09:06.029 iodepth=1 00:09:06.029 norandommap=0 00:09:06.029 numjobs=1 00:09:06.029 00:09:06.029 verify_dump=1 00:09:06.029 verify_backlog=512 00:09:06.029 verify_state_save=0 00:09:06.029 do_verify=1 00:09:06.029 verify=crc32c-intel 00:09:06.029 [job0] 00:09:06.029 filename=/dev/nvme0n1 00:09:06.029 Could not set queue depth (nvme0n1) 00:09:06.029 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.029 fio-3.35 00:09:06.029 Starting 1 thread 00:09:06.965 00:09:06.965 job0: (groupid=0, jobs=1): err= 0: pid=2407878: Thu Nov 28 12:32:49 2024 00:09:06.965 read: IOPS=1561, BW=6247KiB/s (6397kB/s)(6416KiB/1027msec) 00:09:06.965 slat (nsec): min=7045, max=45756, avg=8196.05, stdev=2143.99 00:09:06.965 clat (usec): min=176, max=41128, avg=377.96, stdev=2273.12 00:09:06.965 lat (usec): min=184, max=41153, 
avg=386.16, stdev=2273.84 00:09:06.965 clat percentiles (usec): 00:09:06.965 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 219], 20.00th=[ 227], 00:09:06.965 | 30.00th=[ 243], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 262], 00:09:06.965 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 273], 95.00th=[ 277], 00:09:06.965 | 99.00th=[ 297], 99.50th=[ 310], 99.90th=[41157], 99.95th=[41157], 00:09:06.965 | 99.99th=[41157] 00:09:06.965 write: IOPS=1994, BW=7977KiB/s (8168kB/s)(8192KiB/1027msec); 0 zone resets 00:09:06.965 slat (usec): min=10, max=28486, avg=25.66, stdev=629.21 00:09:06.965 clat (usec): min=115, max=489, avg=167.67, stdev=27.75 00:09:06.965 lat (usec): min=125, max=28806, avg=193.34, stdev=633.19 00:09:06.965 clat percentiles (usec): 00:09:06.965 | 1.00th=[ 124], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 151], 00:09:06.965 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:09:06.965 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 208], 95.00th=[ 217], 00:09:06.965 | 99.00th=[ 243], 99.50th=[ 258], 99.90th=[ 322], 99.95th=[ 334], 00:09:06.965 | 99.99th=[ 490] 00:09:06.965 bw ( KiB/s): min= 6720, max= 9664, per=100.00%, avg=8192.00, stdev=2081.72, samples=2 00:09:06.965 iops : min= 1680, max= 2416, avg=2048.00, stdev=520.43, samples=2 00:09:06.965 lat (usec) : 250=71.03%, 500=28.83% 00:09:06.965 lat (msec) : 50=0.14% 00:09:06.965 cpu : usr=3.12%, sys=5.46%, ctx=3654, majf=0, minf=1 00:09:06.965 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.965 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.965 issued rwts: total=1604,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.965 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.965 00:09:06.965 Run status group 0 (all jobs): 00:09:06.965 READ: bw=6247KiB/s (6397kB/s), 6247KiB/s-6247KiB/s (6397kB/s-6397kB/s), io=6416KiB (6570kB), 
run=1027-1027msec 00:09:06.965 WRITE: bw=7977KiB/s (8168kB/s), 7977KiB/s-7977KiB/s (8168kB/s-8168kB/s), io=8192KiB (8389kB), run=1027-1027msec 00:09:06.965 00:09:06.965 Disk stats (read/write): 00:09:06.965 nvme0n1: ios=1626/2048, merge=0/0, ticks=1420/329, in_queue=1749, util=98.60% 00:09:06.965 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:07.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:07.224 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:07.224 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:07.224 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:07.224 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.224 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.224 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:07.224 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:07.224 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:07.224 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:07.224 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:07.224 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:07.224 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:07.224 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:07.224 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 
-- # for i in {1..20} 00:09:07.224 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:07.224 rmmod nvme_tcp 00:09:07.224 rmmod nvme_fabrics 00:09:07.224 rmmod nvme_keyring 00:09:07.224 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:07.224 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:07.224 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:07.224 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2406812 ']' 00:09:07.224 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2406812 00:09:07.224 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2406812 ']' 00:09:07.224 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2406812 00:09:07.483 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:07.483 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.483 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2406812 00:09:07.483 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.483 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.483 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2406812' 00:09:07.483 killing process with pid 2406812 00:09:07.483 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2406812 00:09:07.483 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2406812 00:09:07.483 12:32:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:07.483 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:07.483 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:07.483 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:07.483 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:07.483 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:07.483 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:07.483 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:07.483 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:07.484 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.484 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.484 12:32:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:10.021 00:09:10.021 real 0m14.405s 00:09:10.021 user 0m33.198s 00:09:10.021 sys 0m4.922s 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:10.021 ************************************ 00:09:10.021 END TEST nvmf_nmic 00:09:10.021 ************************************ 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:10.021 ************************************ 00:09:10.021 START TEST nvmf_fio_target 00:09:10.021 ************************************ 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:10.021 * Looking for test storage... 00:09:10.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:10.021 12:32:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.021 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:10.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.022 --rc genhtml_branch_coverage=1 00:09:10.022 --rc genhtml_function_coverage=1 00:09:10.022 --rc genhtml_legend=1 00:09:10.022 --rc geninfo_all_blocks=1 00:09:10.022 --rc geninfo_unexecuted_blocks=1 00:09:10.022 00:09:10.022 ' 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:10.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.022 --rc genhtml_branch_coverage=1 00:09:10.022 --rc genhtml_function_coverage=1 00:09:10.022 --rc genhtml_legend=1 00:09:10.022 --rc geninfo_all_blocks=1 00:09:10.022 --rc geninfo_unexecuted_blocks=1 00:09:10.022 00:09:10.022 ' 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:10.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.022 --rc genhtml_branch_coverage=1 00:09:10.022 --rc genhtml_function_coverage=1 00:09:10.022 --rc genhtml_legend=1 00:09:10.022 --rc geninfo_all_blocks=1 00:09:10.022 --rc geninfo_unexecuted_blocks=1 00:09:10.022 00:09:10.022 ' 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:10.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.022 --rc genhtml_branch_coverage=1 00:09:10.022 --rc genhtml_function_coverage=1 00:09:10.022 --rc genhtml_legend=1 00:09:10.022 --rc geninfo_all_blocks=1 00:09:10.022 --rc geninfo_unexecuted_blocks=1 00:09:10.022 00:09:10.022 ' 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:10.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:10.022 12:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:15.296 12:32:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:15.296 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:15.296 12:32:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:15.296 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:15.296 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:15.297 Found net devices under 0000:86:00.0: cvl_0_0 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:15.297 Found net devices under 0000:86:00.1: cvl_0_1 
00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:15.297 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:15.556 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:15.556 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:15.556 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:15.556 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:15.556 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:15.556 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:15.556 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:15.556 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:15.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:15.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:09:15.556 00:09:15.556 --- 10.0.0.2 ping statistics --- 00:09:15.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.556 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:09:15.556 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:15.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:15.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:09:15.556 00:09:15.556 --- 10.0.0.1 ping statistics --- 00:09:15.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.556 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:09:15.556 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.556 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:15.556 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:15.556 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.556 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:15.556 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:15.556 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.556 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:15.556 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:15.556 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:15.556 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
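The `nvmf_tcp_init` trace above moves one port of the NIC pair into a namespace, addresses both ends, opens TCP port 4420, and pings in both directions. A dry-run sketch of that sequence (interface names `cvl_0_0`/`cvl_0_1`, the `cvl_0_0_ns_spdk` namespace, addresses, and port come from the log; the `run` wrapper that echoes instead of executing is illustrative, not part of `nvmf/common.sh`):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the NVMe/TCP test-network setup traced in the log.
# Swap 'echo "+ $*"' for '"$@"' (as root) to actually apply the commands.
set -euo pipefail

NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }

run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # target port lives in the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                       # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
```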
00:09:15.556 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:15.556 12:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:15.556 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2411641 00:09:15.556 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2411641 00:09:15.556 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:15.556 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2411641 ']' 00:09:15.556 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.556 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.556 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.556 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.556 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:15.556 [2024-11-28 12:32:58.061216] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:09:15.556 [2024-11-28 12:32:58.061267] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.814 [2024-11-28 12:32:58.129398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.814 [2024-11-28 12:32:58.172230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.814 [2024-11-28 12:32:58.172269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.814 [2024-11-28 12:32:58.172277] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.814 [2024-11-28 12:32:58.172283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.814 [2024-11-28 12:32:58.172288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
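`waitforlisten 2411641` above blocks until the namespaced `nvmf_tgt` is up on `/var/tmp/spdk.sock`. A minimal sketch of that polling pattern, assuming the harness's essentials (process alive + RPC socket present); the retry count and interval here are illustrative, and the real `autotest_common.sh` helper also verifies RPC responsiveness:

```shell
# Minimal "waitforlisten" sketch: poll until pid is alive AND the UNIX
# socket exists, or fail after ~10s / on process death. Illustrative
# bounds; not the exact logic of autotest_common.sh.
waitforlisten() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} i=0
  while [ "$i" -lt 100 ]; do
    if ! kill -0 "$pid" 2>/dev/null; then return 1; fi   # target process died
    if [ -S "$sock" ]; then return 0; fi                 # RPC socket is up
    sleep 0.1
    i=$((i + 1))
  done
  return 1                                               # timed out
}
```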
00:09:15.814 [2024-11-28 12:32:58.176964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.814 [2024-11-28 12:32:58.176983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.814 [2024-11-28 12:32:58.177068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.814 [2024-11-28 12:32:58.177070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.814 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.814 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:15.814 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:15.814 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:15.814 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:15.814 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.814 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:16.072 [2024-11-28 12:32:58.492978] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.072 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:16.330 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:16.330 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:16.588 12:32:58 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:16.588 12:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:16.846 12:32:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:16.846 12:32:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:17.105 12:32:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:17.105 12:32:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:17.363 12:32:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:17.363 12:32:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:17.363 12:32:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:17.620 12:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:17.620 12:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:17.878 12:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:17.878 12:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:18.136 12:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:18.395 12:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:18.395 12:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:18.395 12:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:18.395 12:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:18.653 12:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:18.912 [2024-11-28 12:33:01.251572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:18.912 12:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:19.171 12:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:19.430 12:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
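The `target/fio.sh` steps traced above provision the target entirely over JSON-RPC: transport, seven 64 MiB/512 B malloc bdevs, a raid0 and a concat raid, one subsystem with four namespaces, and a TCP listener. A dry-run sketch of that sequence (NQN, serial, sizes, and addresses come from the log; the `rpc` wrapper that echoes instead of invoking `rpc.py`, and the omitted `--hostnqn`/`--hostid` flags on the connect line, are simplifications):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC provisioning sequence from target/fio.sh.
# Swap 'echo "+ $RPC $*"' for '"$RPC" "$@"' against a live nvmf_tgt.
set -euo pipefail

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
rpc() { echo "+ $RPC $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
for _ in 1 2 3 4 5 6 7; do            # -> Malloc0 .. Malloc6
  rpc bdev_malloc_create 64 512
done
rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
for ns in Malloc0 Malloc1 raid0 concat0; do
  rpc nvmf_subsystem_add_ns "$NQN" "$ns"
done
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
echo "+ nvme connect -t tcp -n $NQN -a 10.0.0.2 -s 4420"
```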
00:09:20.366 12:33:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:20.366 12:33:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:20.366 12:33:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:20.366 12:33:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:20.366 12:33:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:20.366 12:33:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:22.899 12:33:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:22.899 12:33:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:22.899 12:33:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:22.899 12:33:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:22.899 12:33:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:22.899 12:33:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:22.899 12:33:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:22.899 [global] 00:09:22.899 thread=1 00:09:22.899 invalidate=1 00:09:22.899 rw=write 00:09:22.899 time_based=1 00:09:22.899 runtime=1 00:09:22.899 ioengine=libaio 00:09:22.899 direct=1 00:09:22.899 bs=4096 00:09:22.899 iodepth=1 00:09:22.899 norandommap=0 00:09:22.899 numjobs=1 00:09:22.899 00:09:22.899 
verify_dump=1 00:09:22.899 verify_backlog=512 00:09:22.899 verify_state_save=0 00:09:22.899 do_verify=1 00:09:22.899 verify=crc32c-intel 00:09:22.899 [job0] 00:09:22.899 filename=/dev/nvme0n1 00:09:22.899 [job1] 00:09:22.899 filename=/dev/nvme0n2 00:09:22.899 [job2] 00:09:22.899 filename=/dev/nvme0n3 00:09:22.899 [job3] 00:09:22.899 filename=/dev/nvme0n4 00:09:22.899 Could not set queue depth (nvme0n1) 00:09:22.899 Could not set queue depth (nvme0n2) 00:09:22.899 Could not set queue depth (nvme0n3) 00:09:22.899 Could not set queue depth (nvme0n4) 00:09:22.899 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.899 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.900 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.900 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.900 fio-3.35 00:09:22.900 Starting 4 threads 00:09:24.278 00:09:24.278 job0: (groupid=0, jobs=1): err= 0: pid=2413116: Thu Nov 28 12:33:06 2024 00:09:24.278 read: IOPS=21, BW=86.4KiB/s (88.5kB/s)(88.0KiB/1018msec) 00:09:24.278 slat (nsec): min=9814, max=27509, avg=14905.18, stdev=5135.07 00:09:24.278 clat (usec): min=40516, max=41942, avg=41055.00, stdev=312.72 00:09:24.278 lat (usec): min=40528, max=41969, avg=41069.91, stdev=315.81 00:09:24.278 clat percentiles (usec): 00:09:24.278 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:24.278 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:24.278 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:09:24.278 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:24.278 | 99.99th=[41681] 00:09:24.278 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:09:24.278 slat (nsec): min=9910, 
max=44611, avg=13194.29, stdev=2863.32 00:09:24.278 clat (usec): min=131, max=993, avg=206.06, stdev=53.24 00:09:24.278 lat (usec): min=142, max=1011, avg=219.26, stdev=54.11 00:09:24.278 clat percentiles (usec): 00:09:24.278 | 1.00th=[ 137], 5.00th=[ 147], 10.00th=[ 157], 20.00th=[ 172], 00:09:24.278 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 204], 60.00th=[ 215], 00:09:24.278 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 255], 00:09:24.278 | 99.00th=[ 285], 99.50th=[ 347], 99.90th=[ 996], 99.95th=[ 996], 00:09:24.278 | 99.99th=[ 996] 00:09:24.278 bw ( KiB/s): min= 4096, max= 4096, per=18.17%, avg=4096.00, stdev= 0.00, samples=1 00:09:24.278 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:24.278 lat (usec) : 250=88.20%, 500=7.30%, 750=0.19%, 1000=0.19% 00:09:24.278 lat (msec) : 50=4.12% 00:09:24.278 cpu : usr=0.59%, sys=0.69%, ctx=537, majf=0, minf=1 00:09:24.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.278 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.278 job1: (groupid=0, jobs=1): err= 0: pid=2413118: Thu Nov 28 12:33:06 2024 00:09:24.278 read: IOPS=419, BW=1679KiB/s (1719kB/s)(1736KiB/1034msec) 00:09:24.278 slat (nsec): min=6428, max=25457, avg=7918.37, stdev=2452.19 00:09:24.278 clat (usec): min=199, max=41976, avg=2162.89, stdev=8538.74 00:09:24.278 lat (usec): min=206, max=41987, avg=2170.81, stdev=8539.90 00:09:24.278 clat percentiles (usec): 00:09:24.278 | 1.00th=[ 210], 5.00th=[ 262], 10.00th=[ 265], 20.00th=[ 273], 00:09:24.278 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 281], 60.00th=[ 285], 00:09:24.278 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 652], 00:09:24.278 | 99.00th=[41157], 
99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:24.278 | 99.99th=[42206] 00:09:24.278 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:09:24.278 slat (nsec): min=9148, max=40457, avg=10714.47, stdev=2167.48 00:09:24.278 clat (usec): min=135, max=414, avg=162.30, stdev=16.79 00:09:24.278 lat (usec): min=146, max=454, avg=173.01, stdev=17.83 00:09:24.278 clat percentiles (usec): 00:09:24.278 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:09:24.278 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:09:24.278 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 184], 00:09:24.278 | 99.00th=[ 192], 99.50th=[ 229], 99.90th=[ 416], 99.95th=[ 416], 00:09:24.278 | 99.99th=[ 416] 00:09:24.278 bw ( KiB/s): min= 4096, max= 4096, per=18.17%, avg=4096.00, stdev= 0.00, samples=1 00:09:24.278 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:24.278 lat (usec) : 250=54.86%, 500=42.60%, 750=0.32%, 1000=0.11% 00:09:24.278 lat (msec) : 50=2.11% 00:09:24.278 cpu : usr=0.29%, sys=1.06%, ctx=946, majf=0, minf=2 00:09:24.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.278 issued rwts: total=434,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.278 job2: (groupid=0, jobs=1): err= 0: pid=2413119: Thu Nov 28 12:33:06 2024 00:09:24.278 read: IOPS=2342, BW=9371KiB/s (9596kB/s)(9380KiB/1001msec) 00:09:24.278 slat (nsec): min=6348, max=26641, avg=7379.99, stdev=917.98 00:09:24.278 clat (usec): min=154, max=295, avg=212.28, stdev=22.10 00:09:24.278 lat (usec): min=161, max=302, avg=219.66, stdev=22.19 00:09:24.279 clat percentiles (usec): 00:09:24.279 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 194], 
00:09:24.279 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:09:24.279 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 245], 95.00th=[ 251], 00:09:24.279 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 285], 99.95th=[ 289], 00:09:24.279 | 99.99th=[ 297] 00:09:24.279 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:24.279 slat (usec): min=9, max=22419, avg=19.37, stdev=442.89 00:09:24.279 clat (usec): min=110, max=333, avg=165.79, stdev=32.40 00:09:24.279 lat (usec): min=120, max=22731, avg=185.16, stdev=446.96 00:09:24.279 clat percentiles (usec): 00:09:24.279 | 1.00th=[ 118], 5.00th=[ 124], 10.00th=[ 131], 20.00th=[ 139], 00:09:24.279 | 30.00th=[ 145], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 169], 00:09:24.279 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 206], 95.00th=[ 237], 00:09:24.279 | 99.00th=[ 251], 99.50th=[ 260], 99.90th=[ 314], 99.95th=[ 322], 00:09:24.279 | 99.99th=[ 334] 00:09:24.279 bw ( KiB/s): min=10264, max=10264, per=45.53%, avg=10264.00, stdev= 0.00, samples=1 00:09:24.279 iops : min= 2566, max= 2566, avg=2566.00, stdev= 0.00, samples=1 00:09:24.279 lat (usec) : 250=96.41%, 500=3.59% 00:09:24.279 cpu : usr=2.70%, sys=4.40%, ctx=4907, majf=0, minf=1 00:09:24.279 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.279 issued rwts: total=2345,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.279 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.279 job3: (groupid=0, jobs=1): err= 0: pid=2413120: Thu Nov 28 12:33:06 2024 00:09:24.279 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:24.279 slat (nsec): min=7426, max=23842, avg=9368.39, stdev=1503.64 00:09:24.279 clat (usec): min=205, max=501, avg=258.61, stdev=33.01 00:09:24.279 lat (usec): min=214, max=511, avg=267.98, 
stdev=33.15 00:09:24.279 clat percentiles (usec): 00:09:24.279 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 235], 00:09:24.279 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 255], 00:09:24.279 | 70.00th=[ 269], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 306], 00:09:24.279 | 99.00th=[ 408], 99.50th=[ 453], 99.90th=[ 490], 99.95th=[ 498], 00:09:24.279 | 99.99th=[ 502] 00:09:24.279 write: IOPS=2241, BW=8967KiB/s (9182kB/s)(8976KiB/1001msec); 0 zone resets 00:09:24.279 slat (nsec): min=10739, max=44395, avg=13055.13, stdev=2244.29 00:09:24.279 clat (usec): min=141, max=548, avg=182.13, stdev=30.06 00:09:24.279 lat (usec): min=154, max=562, avg=195.19, stdev=30.37 00:09:24.279 clat percentiles (usec): 00:09:24.279 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:09:24.279 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:09:24.279 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 206], 95.00th=[ 269], 00:09:24.279 | 99.00th=[ 289], 99.50th=[ 289], 99.90th=[ 306], 99.95th=[ 306], 00:09:24.279 | 99.99th=[ 545] 00:09:24.279 bw ( KiB/s): min= 9848, max= 9848, per=43.68%, avg=9848.00, stdev= 0.00, samples=1 00:09:24.279 iops : min= 2462, max= 2462, avg=2462.00, stdev= 0.00, samples=1 00:09:24.279 lat (usec) : 250=73.63%, 500=26.33%, 750=0.05% 00:09:24.279 cpu : usr=4.10%, sys=6.90%, ctx=4293, majf=0, minf=1 00:09:24.279 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.279 issued rwts: total=2048,2244,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.279 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.279 00:09:24.279 Run status group 0 (all jobs): 00:09:24.279 READ: bw=18.3MiB/s (19.2MB/s), 86.4KiB/s-9371KiB/s (88.5kB/s-9596kB/s), io=18.9MiB (19.9MB), run=1001-1034msec 00:09:24.279 WRITE: bw=22.0MiB/s 
(23.1MB/s), 1981KiB/s-9.99MiB/s (2028kB/s-10.5MB/s), io=22.8MiB (23.9MB), run=1001-1034msec 00:09:24.279 00:09:24.279 Disk stats (read/write): 00:09:24.279 nvme0n1: ios=69/512, merge=0/0, ticks=1176/99, in_queue=1275, util=98.10% 00:09:24.279 nvme0n2: ios=429/512, merge=0/0, ticks=730/82, in_queue=812, util=86.78% 00:09:24.279 nvme0n3: ios=2072/2152, merge=0/0, ticks=1401/341, in_queue=1742, util=98.43% 00:09:24.279 nvme0n4: ios=1727/2048, merge=0/0, ticks=785/343, in_queue=1128, util=98.63% 00:09:24.279 12:33:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:24.279 [global] 00:09:24.279 thread=1 00:09:24.279 invalidate=1 00:09:24.279 rw=randwrite 00:09:24.279 time_based=1 00:09:24.279 runtime=1 00:09:24.279 ioengine=libaio 00:09:24.279 direct=1 00:09:24.279 bs=4096 00:09:24.279 iodepth=1 00:09:24.279 norandommap=0 00:09:24.279 numjobs=1 00:09:24.279 00:09:24.279 verify_dump=1 00:09:24.279 verify_backlog=512 00:09:24.279 verify_state_save=0 00:09:24.279 do_verify=1 00:09:24.279 verify=crc32c-intel 00:09:24.279 [job0] 00:09:24.279 filename=/dev/nvme0n1 00:09:24.279 [job1] 00:09:24.279 filename=/dev/nvme0n2 00:09:24.279 [job2] 00:09:24.279 filename=/dev/nvme0n3 00:09:24.279 [job3] 00:09:24.279 filename=/dev/nvme0n4 00:09:24.279 Could not set queue depth (nvme0n1) 00:09:24.279 Could not set queue depth (nvme0n2) 00:09:24.279 Could not set queue depth (nvme0n3) 00:09:24.279 Could not set queue depth (nvme0n4) 00:09:24.279 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.279 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.279 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.279 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.279 fio-3.35 00:09:24.279 Starting 4 threads 00:09:25.657 00:09:25.657 job0: (groupid=0, jobs=1): err= 0: pid=2413492: Thu Nov 28 12:33:08 2024 00:09:25.657 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:09:25.657 slat (nsec): min=9894, max=23197, avg=16801.41, stdev=6155.31 00:09:25.657 clat (usec): min=33091, max=42023, avg=40762.50, stdev=1750.32 00:09:25.657 lat (usec): min=33113, max=42033, avg=40779.30, stdev=1748.49 00:09:25.657 clat percentiles (usec): 00:09:25.657 | 1.00th=[33162], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:25.657 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:25.657 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:09:25.657 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:25.657 | 99.99th=[42206] 00:09:25.657 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:09:25.657 slat (nsec): min=8988, max=39471, avg=10079.67, stdev=2215.43 00:09:25.657 clat (usec): min=137, max=405, avg=198.82, stdev=38.87 00:09:25.657 lat (usec): min=146, max=444, avg=208.90, stdev=39.33 00:09:25.657 clat percentiles (usec): 00:09:25.657 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:09:25.657 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 188], 60.00th=[ 202], 00:09:25.657 | 70.00th=[ 227], 80.00th=[ 239], 90.00th=[ 253], 95.00th=[ 262], 00:09:25.657 | 99.00th=[ 285], 99.50th=[ 302], 99.90th=[ 404], 99.95th=[ 404], 00:09:25.657 | 99.99th=[ 404] 00:09:25.657 bw ( KiB/s): min= 4096, max= 4096, per=19.12%, avg=4096.00, stdev= 0.00, samples=1 00:09:25.657 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:25.657 lat (usec) : 250=84.08%, 500=11.80% 00:09:25.657 lat (msec) : 50=4.12% 00:09:25.657 cpu : usr=0.50%, sys=0.30%, ctx=534, majf=0, minf=1 00:09:25.657 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:09:25.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.657 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.657 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.657 job1: (groupid=0, jobs=1): err= 0: pid=2413493: Thu Nov 28 12:33:08 2024 00:09:25.657 read: IOPS=1737, BW=6949KiB/s (7116kB/s)(6956KiB/1001msec) 00:09:25.657 slat (nsec): min=6241, max=25550, avg=7286.06, stdev=1042.76 00:09:25.657 clat (usec): min=206, max=41149, avg=364.12, stdev=1692.23 00:09:25.657 lat (usec): min=213, max=41157, avg=371.40, stdev=1692.66 00:09:25.657 clat percentiles (usec): 00:09:25.657 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 247], 00:09:25.657 | 30.00th=[ 255], 40.00th=[ 269], 50.00th=[ 289], 60.00th=[ 302], 00:09:25.657 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 388], 95.00th=[ 437], 00:09:25.657 | 99.00th=[ 457], 99.50th=[ 465], 99.90th=[41157], 99.95th=[41157], 00:09:25.657 | 99.99th=[41157] 00:09:25.657 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:25.658 slat (nsec): min=8970, max=42911, avg=9874.31, stdev=1437.67 00:09:25.658 clat (usec): min=120, max=338, avg=159.46, stdev=28.30 00:09:25.658 lat (usec): min=130, max=374, avg=169.34, stdev=28.38 00:09:25.658 clat percentiles (usec): 00:09:25.658 | 1.00th=[ 125], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 139], 00:09:25.658 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 153], 00:09:25.658 | 70.00th=[ 161], 80.00th=[ 184], 90.00th=[ 206], 95.00th=[ 221], 00:09:25.658 | 99.00th=[ 241], 99.50th=[ 247], 99.90th=[ 253], 99.95th=[ 255], 00:09:25.658 | 99.99th=[ 338] 00:09:25.658 bw ( KiB/s): min= 8192, max= 8192, per=38.24%, avg=8192.00, stdev= 0.00, samples=1 00:09:25.658 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:25.658 lat (usec) : 250=64.64%, 500=35.28% 00:09:25.658 lat 
(msec) : 50=0.08% 00:09:25.658 cpu : usr=1.90%, sys=3.20%, ctx=3787, majf=0, minf=1 00:09:25.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:25.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.658 issued rwts: total=1739,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.658 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.658 job2: (groupid=0, jobs=1): err= 0: pid=2413496: Thu Nov 28 12:33:08 2024 00:09:25.658 read: IOPS=21, BW=85.8KiB/s (87.8kB/s)(88.0KiB/1026msec) 00:09:25.658 slat (nsec): min=10114, max=25907, avg=22067.95, stdev=2825.46 00:09:25.658 clat (usec): min=40824, max=41105, avg=40975.48, stdev=60.29 00:09:25.658 lat (usec): min=40847, max=41126, avg=40997.55, stdev=59.21 00:09:25.658 clat percentiles (usec): 00:09:25.658 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:25.658 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:25.658 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:25.658 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:25.658 | 99.99th=[41157] 00:09:25.658 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:09:25.658 slat (nsec): min=10014, max=35382, avg=11534.14, stdev=2133.43 00:09:25.658 clat (usec): min=145, max=370, avg=227.62, stdev=29.02 00:09:25.658 lat (usec): min=157, max=406, avg=239.15, stdev=29.39 00:09:25.658 clat percentiles (usec): 00:09:25.658 | 1.00th=[ 153], 5.00th=[ 165], 10.00th=[ 184], 20.00th=[ 200], 00:09:25.658 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 239], 60.00th=[ 241], 00:09:25.658 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 249], 95.00th=[ 258], 00:09:25.658 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 371], 99.95th=[ 371], 00:09:25.658 | 99.99th=[ 371] 00:09:25.658 bw ( KiB/s): min= 4096, max= 4096, 
per=19.12%, avg=4096.00, stdev= 0.00, samples=1 00:09:25.658 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:25.658 lat (usec) : 250=87.08%, 500=8.80% 00:09:25.658 lat (msec) : 50=4.12% 00:09:25.658 cpu : usr=0.39%, sys=0.88%, ctx=534, majf=0, minf=1 00:09:25.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:25.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.658 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.658 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.658 job3: (groupid=0, jobs=1): err= 0: pid=2413500: Thu Nov 28 12:33:08 2024 00:09:25.658 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:25.658 slat (nsec): min=4986, max=16916, avg=7411.83, stdev=816.11 00:09:25.658 clat (usec): min=208, max=370, avg=265.22, stdev=30.96 00:09:25.658 lat (usec): min=216, max=378, avg=272.63, stdev=30.99 00:09:25.658 clat percentiles (usec): 00:09:25.658 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 239], 00:09:25.658 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 265], 00:09:25.658 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 318], 00:09:25.658 | 99.00th=[ 343], 99.50th=[ 351], 99.90th=[ 367], 99.95th=[ 371], 00:09:25.658 | 99.99th=[ 371] 00:09:25.658 write: IOPS=2420, BW=9682KiB/s (9915kB/s)(9692KiB/1001msec); 0 zone resets 00:09:25.658 slat (nsec): min=9018, max=37706, avg=10458.50, stdev=1796.93 00:09:25.658 clat (usec): min=128, max=427, avg=168.14, stdev=41.73 00:09:25.658 lat (usec): min=138, max=465, avg=178.60, stdev=42.33 00:09:25.658 clat percentiles (usec): 00:09:25.658 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 143], 00:09:25.658 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 155], 00:09:25.658 | 70.00th=[ 163], 80.00th=[ 192], 90.00th=[ 223], 95.00th=[ 255], 
00:09:25.658 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 363], 99.95th=[ 367], 00:09:25.658 | 99.99th=[ 429] 00:09:25.658 bw ( KiB/s): min= 9040, max= 9040, per=42.20%, avg=9040.00, stdev= 0.00, samples=1 00:09:25.658 iops : min= 2260, max= 2260, avg=2260.00, stdev= 0.00, samples=1 00:09:25.658 lat (usec) : 250=70.16%, 500=29.84% 00:09:25.658 cpu : usr=2.50%, sys=3.80%, ctx=4471, majf=0, minf=1 00:09:25.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:25.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.658 issued rwts: total=2048,2423,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.658 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.658 00:09:25.658 Run status group 0 (all jobs): 00:09:25.658 READ: bw=14.6MiB/s (15.3MB/s), 85.8KiB/s-8184KiB/s (87.8kB/s-8380kB/s), io=15.0MiB (15.7MB), run=1001-1026msec 00:09:25.658 WRITE: bw=20.9MiB/s (21.9MB/s), 1996KiB/s-9682KiB/s (2044kB/s-9915kB/s), io=21.5MiB (22.5MB), run=1001-1026msec 00:09:25.658 00:09:25.658 Disk stats (read/write): 00:09:25.658 nvme0n1: ios=65/512, merge=0/0, ticks=690/97, in_queue=787, util=82.77% 00:09:25.658 nvme0n2: ios=1357/1536, merge=0/0, ticks=517/245, in_queue=762, util=82.96% 00:09:25.658 nvme0n3: ios=16/512, merge=0/0, ticks=656/110, in_queue=766, util=87.54% 00:09:25.658 nvme0n4: ios=1586/2048, merge=0/0, ticks=401/342, in_queue=743, util=89.18% 00:09:25.658 12:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:25.658 [global] 00:09:25.658 thread=1 00:09:25.658 invalidate=1 00:09:25.658 rw=write 00:09:25.658 time_based=1 00:09:25.658 runtime=1 00:09:25.658 ioengine=libaio 00:09:25.658 direct=1 00:09:25.658 bs=4096 00:09:25.658 iodepth=128 00:09:25.658 norandommap=0 00:09:25.658 numjobs=1 
00:09:25.658 00:09:25.658 verify_dump=1 00:09:25.658 verify_backlog=512 00:09:25.658 verify_state_save=0 00:09:25.658 do_verify=1 00:09:25.658 verify=crc32c-intel 00:09:25.658 [job0] 00:09:25.658 filename=/dev/nvme0n1 00:09:25.658 [job1] 00:09:25.658 filename=/dev/nvme0n2 00:09:25.658 [job2] 00:09:25.658 filename=/dev/nvme0n3 00:09:25.658 [job3] 00:09:25.658 filename=/dev/nvme0n4 00:09:25.658 Could not set queue depth (nvme0n1) 00:09:25.658 Could not set queue depth (nvme0n2) 00:09:25.658 Could not set queue depth (nvme0n3) 00:09:25.658 Could not set queue depth (nvme0n4) 00:09:25.917 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:25.917 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:25.917 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:25.917 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:25.917 fio-3.35 00:09:25.917 Starting 4 threads 00:09:27.309 00:09:27.309 job0: (groupid=0, jobs=1): err= 0: pid=2414058: Thu Nov 28 12:33:09 2024 00:09:27.309 read: IOPS=2640, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1011msec) 00:09:27.309 slat (nsec): min=1738, max=27921k, avg=191127.06, stdev=1423566.38 00:09:27.309 clat (usec): min=3516, max=84548, avg=24017.39, stdev=15123.38 00:09:27.309 lat (usec): min=3529, max=84559, avg=24208.51, stdev=15243.18 00:09:27.309 clat percentiles (usec): 00:09:27.309 | 1.00th=[ 6063], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[11207], 00:09:27.309 | 30.00th=[14746], 40.00th=[17171], 50.00th=[20317], 60.00th=[22938], 00:09:27.309 | 70.00th=[25822], 80.00th=[32375], 90.00th=[46924], 95.00th=[64226], 00:09:27.309 | 99.00th=[66323], 99.50th=[66847], 99.90th=[76022], 99.95th=[80217], 00:09:27.309 | 99.99th=[84411] 00:09:27.309 write: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec); 0 zone resets 
00:09:27.309 slat (usec): min=3, max=14402, avg=144.97, stdev=814.50 00:09:27.309 clat (usec): min=682, max=56601, avg=20792.87, stdev=11238.63 00:09:27.309 lat (usec): min=695, max=56606, avg=20937.84, stdev=11312.93 00:09:27.309 clat percentiles (usec): 00:09:27.309 | 1.00th=[ 6783], 5.00th=[ 7373], 10.00th=[ 8586], 20.00th=[10159], 00:09:27.309 | 30.00th=[12911], 40.00th=[14746], 50.00th=[19268], 60.00th=[25035], 00:09:27.309 | 70.00th=[25560], 80.00th=[26870], 90.00th=[35914], 95.00th=[45351], 00:09:27.309 | 99.00th=[52167], 99.50th=[53216], 99.90th=[54264], 99.95th=[54264], 00:09:27.309 | 99.99th=[56361] 00:09:27.309 bw ( KiB/s): min= 9672, max=14768, per=19.86%, avg=12220.00, stdev=3603.42, samples=2 00:09:27.309 iops : min= 2418, max= 3692, avg=3055.00, stdev=900.85, samples=2 00:09:27.309 lat (usec) : 750=0.07% 00:09:27.309 lat (msec) : 2=0.03%, 4=0.51%, 10=14.16%, 20=33.00%, 50=46.15% 00:09:27.309 lat (msec) : 100=6.08% 00:09:27.309 cpu : usr=3.27%, sys=3.56%, ctx=269, majf=0, minf=1 00:09:27.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:27.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.309 issued rwts: total=2670,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.309 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:27.309 job1: (groupid=0, jobs=1): err= 0: pid=2414069: Thu Nov 28 12:33:09 2024 00:09:27.309 read: IOPS=2829, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1003msec) 00:09:27.309 slat (nsec): min=1199, max=18148k, avg=140172.51, stdev=989422.99 00:09:27.309 clat (usec): min=639, max=57687, avg=17082.56, stdev=9395.40 00:09:27.309 lat (usec): min=1012, max=59721, avg=17222.73, stdev=9483.74 00:09:27.309 clat percentiles (usec): 00:09:27.309 | 1.00th=[ 1696], 5.00th=[ 3195], 10.00th=[ 8586], 20.00th=[ 9896], 00:09:27.309 | 30.00th=[10290], 40.00th=[11863], 50.00th=[15533], 
60.00th=[17695], 00:09:27.309 | 70.00th=[20317], 80.00th=[25035], 90.00th=[31327], 95.00th=[35390], 00:09:27.310 | 99.00th=[44827], 99.50th=[45876], 99.90th=[57934], 99.95th=[57934], 00:09:27.310 | 99.99th=[57934] 00:09:27.310 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:09:27.310 slat (usec): min=2, max=32031, avg=185.54, stdev=1101.19 00:09:27.310 clat (usec): min=1592, max=102088, avg=25609.99, stdev=16375.13 00:09:27.310 lat (usec): min=1606, max=102601, avg=25795.52, stdev=16486.67 00:09:27.310 clat percentiles (msec): 00:09:27.310 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:09:27.310 | 30.00th=[ 18], 40.00th=[ 22], 50.00th=[ 25], 60.00th=[ 26], 00:09:27.310 | 70.00th=[ 27], 80.00th=[ 31], 90.00th=[ 43], 95.00th=[ 65], 00:09:27.310 | 99.00th=[ 91], 99.50th=[ 97], 99.90th=[ 103], 99.95th=[ 103], 00:09:27.310 | 99.99th=[ 103] 00:09:27.310 bw ( KiB/s): min=11312, max=13264, per=19.97%, avg=12288.00, stdev=1380.27, samples=2 00:09:27.310 iops : min= 2828, max= 3316, avg=3072.00, stdev=345.07, samples=2 00:09:27.310 lat (usec) : 750=0.02%, 1000=0.02% 00:09:27.310 lat (msec) : 2=0.52%, 4=2.89%, 10=10.25%, 20=36.92%, 50=45.11% 00:09:27.310 lat (msec) : 100=4.15%, 250=0.12% 00:09:27.310 cpu : usr=1.90%, sys=4.39%, ctx=336, majf=0, minf=2 00:09:27.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:27.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.310 issued rwts: total=2838,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.310 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:27.310 job2: (groupid=0, jobs=1): err= 0: pid=2414087: Thu Nov 28 12:33:09 2024 00:09:27.310 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:09:27.310 slat (nsec): min=1132, max=32289k, avg=102560.94, stdev=945783.53 00:09:27.310 clat (usec): min=3641, max=72413, 
avg=14496.37, stdev=9350.37 00:09:27.310 lat (usec): min=3650, max=72437, avg=14598.93, stdev=9430.06 00:09:27.310 clat percentiles (usec): 00:09:27.310 | 1.00th=[ 5014], 5.00th=[ 6783], 10.00th=[ 8455], 20.00th=[ 9110], 00:09:27.310 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10945], 60.00th=[12125], 00:09:27.310 | 70.00th=[13173], 80.00th=[16188], 90.00th=[28181], 95.00th=[39060], 00:09:27.310 | 99.00th=[47973], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:09:27.310 | 99.99th=[72877] 00:09:27.310 write: IOPS=4813, BW=18.8MiB/s (19.7MB/s)(19.0MiB/1010msec); 0 zone resets 00:09:27.310 slat (usec): min=2, max=12050, avg=85.39, stdev=635.28 00:09:27.310 clat (usec): min=840, max=64762, avg=12640.09, stdev=9489.69 00:09:27.310 lat (usec): min=849, max=64771, avg=12725.48, stdev=9538.65 00:09:27.310 clat percentiles (usec): 00:09:27.310 | 1.00th=[ 3359], 5.00th=[ 4752], 10.00th=[ 5735], 20.00th=[ 8029], 00:09:27.310 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[10421], 00:09:27.310 | 70.00th=[12518], 80.00th=[14484], 90.00th=[21365], 95.00th=[28705], 00:09:27.310 | 99.00th=[64750], 99.50th=[64750], 99.90th=[64750], 99.95th=[64750], 00:09:27.310 | 99.99th=[64750] 00:09:27.310 bw ( KiB/s): min=12536, max=25344, per=30.78%, avg=18940.00, stdev=9056.62, samples=2 00:09:27.310 iops : min= 3134, max= 6336, avg=4735.00, stdev=2264.16, samples=2 00:09:27.310 lat (usec) : 1000=0.21% 00:09:27.310 lat (msec) : 2=0.06%, 4=1.18%, 10=46.43%, 20=38.47%, 50=12.53% 00:09:27.310 lat (msec) : 100=1.11% 00:09:27.310 cpu : usr=4.36%, sys=5.25%, ctx=393, majf=0, minf=2 00:09:27.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:27.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.310 issued rwts: total=4608,4862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.310 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:09:27.310 job3: (groupid=0, jobs=1): err= 0: pid=2414093: Thu Nov 28 12:33:09 2024 00:09:27.310 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:09:27.310 slat (nsec): min=1107, max=15139k, avg=113230.78, stdev=747690.88 00:09:27.310 clat (usec): min=3823, max=75789, avg=14273.37, stdev=10052.34 00:09:27.310 lat (usec): min=3834, max=75818, avg=14386.60, stdev=10125.43 00:09:27.310 clat percentiles (usec): 00:09:27.310 | 1.00th=[ 4359], 5.00th=[ 7701], 10.00th=[ 8586], 20.00th=[ 9110], 00:09:27.310 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10945], 60.00th=[12518], 00:09:27.310 | 70.00th=[14746], 80.00th=[16450], 90.00th=[20841], 95.00th=[31065], 00:09:27.310 | 99.00th=[64226], 99.50th=[65799], 99.90th=[68682], 99.95th=[68682], 00:09:27.310 | 99.99th=[76022] 00:09:27.310 write: IOPS=4496, BW=17.6MiB/s (18.4MB/s)(17.8MiB/1011msec); 0 zone resets 00:09:27.310 slat (usec): min=2, max=31467, avg=110.77, stdev=637.27 00:09:27.310 clat (usec): min=2814, max=55664, avg=15118.34, stdev=10538.27 00:09:27.310 lat (usec): min=2824, max=55669, avg=15229.11, stdev=10594.44 00:09:27.310 clat percentiles (usec): 00:09:27.310 | 1.00th=[ 3654], 5.00th=[ 5604], 10.00th=[ 7373], 20.00th=[ 9372], 00:09:27.310 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[11600], 00:09:27.310 | 70.00th=[13829], 80.00th=[24249], 90.00th=[25822], 95.00th=[41157], 00:09:27.310 | 99.00th=[53216], 99.50th=[54264], 99.90th=[55837], 99.95th=[55837], 00:09:27.310 | 99.99th=[55837] 00:09:27.310 bw ( KiB/s): min=10776, max=24576, per=28.73%, avg=17676.00, stdev=9758.07, samples=2 00:09:27.310 iops : min= 2694, max= 6144, avg=4419.00, stdev=2439.52, samples=2 00:09:27.310 lat (msec) : 4=0.88%, 10=46.59%, 20=34.68%, 50=15.36%, 100=2.50% 00:09:27.310 cpu : usr=2.38%, sys=5.05%, ctx=590, majf=0, minf=1 00:09:27.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:27.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:09:27.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.310 issued rwts: total=4096,4546,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.310 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:27.310 00:09:27.310 Run status group 0 (all jobs): 00:09:27.310 READ: bw=54.9MiB/s (57.6MB/s), 10.3MiB/s-17.8MiB/s (10.8MB/s-18.7MB/s), io=55.5MiB (58.2MB), run=1003-1011msec 00:09:27.310 WRITE: bw=60.1MiB/s (63.0MB/s), 11.9MiB/s-18.8MiB/s (12.4MB/s-19.7MB/s), io=60.8MiB (63.7MB), run=1003-1011msec 00:09:27.310 00:09:27.310 Disk stats (read/write): 00:09:27.310 nvme0n1: ios=2270/2560, merge=0/0, ticks=44733/50115, in_queue=94848, util=100.00% 00:09:27.310 nvme0n2: ios=2456/2560, merge=0/0, ticks=37775/63535, in_queue=101310, util=98.98% 00:09:27.310 nvme0n3: ios=3584/3702, merge=0/0, ticks=50273/42584, in_queue=92857, util=88.95% 00:09:27.310 nvme0n4: ios=3885/4096, merge=0/0, ticks=30283/29648, in_queue=59931, util=97.79% 00:09:27.310 12:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:27.310 [global] 00:09:27.310 thread=1 00:09:27.310 invalidate=1 00:09:27.310 rw=randwrite 00:09:27.310 time_based=1 00:09:27.310 runtime=1 00:09:27.310 ioengine=libaio 00:09:27.310 direct=1 00:09:27.310 bs=4096 00:09:27.310 iodepth=128 00:09:27.310 norandommap=0 00:09:27.310 numjobs=1 00:09:27.310 00:09:27.310 verify_dump=1 00:09:27.310 verify_backlog=512 00:09:27.310 verify_state_save=0 00:09:27.310 do_verify=1 00:09:27.310 verify=crc32c-intel 00:09:27.310 [job0] 00:09:27.310 filename=/dev/nvme0n1 00:09:27.310 [job1] 00:09:27.310 filename=/dev/nvme0n2 00:09:27.310 [job2] 00:09:27.310 filename=/dev/nvme0n3 00:09:27.310 [job3] 00:09:27.310 filename=/dev/nvme0n4 00:09:27.310 Could not set queue depth (nvme0n1) 00:09:27.310 Could not set queue depth (nvme0n2) 00:09:27.310 Could not set queue depth (nvme0n3) 
00:09:27.310 Could not set queue depth (nvme0n4) 00:09:27.569 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:27.569 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:27.569 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:27.569 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:27.569 fio-3.35 00:09:27.569 Starting 4 threads 00:09:28.963 00:09:28.963 job0: (groupid=0, jobs=1): err= 0: pid=2414771: Thu Nov 28 12:33:11 2024 00:09:28.963 read: IOPS=3378, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1003msec) 00:09:28.963 slat (nsec): min=1474, max=21093k, avg=131795.31, stdev=795422.89 00:09:28.963 clat (usec): min=1872, max=40273, avg=14775.13, stdev=5388.78 00:09:28.963 lat (usec): min=1889, max=40298, avg=14906.93, stdev=5459.80 00:09:28.963 clat percentiles (usec): 00:09:28.963 | 1.00th=[ 7701], 5.00th=[ 9110], 10.00th=[10421], 20.00th=[11207], 00:09:28.963 | 30.00th=[11731], 40.00th=[13173], 50.00th=[13435], 60.00th=[14091], 00:09:28.963 | 70.00th=[14746], 80.00th=[17171], 90.00th=[20579], 95.00th=[26870], 00:09:28.963 | 99.00th=[36439], 99.50th=[36439], 99.90th=[38011], 99.95th=[38536], 00:09:28.963 | 99.99th=[40109] 00:09:28.963 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:09:28.963 slat (usec): min=2, max=11476, avg=148.29, stdev=702.25 00:09:28.963 clat (msec): min=5, max=100, avg=21.47, stdev=16.60 00:09:28.963 lat (msec): min=5, max=100, avg=21.61, stdev=16.70 00:09:28.963 clat percentiles (msec): 00:09:28.963 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:09:28.963 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 15], 60.00th=[ 19], 00:09:28.963 | 70.00th=[ 22], 80.00th=[ 30], 90.00th=[ 40], 95.00th=[ 57], 00:09:28.963 | 99.00th=[ 95], 99.50th=[ 97], 99.90th=[ 101], 
99.95th=[ 101], 00:09:28.963 | 99.99th=[ 101] 00:09:28.963 bw ( KiB/s): min=13448, max=15224, per=20.65%, avg=14336.00, stdev=1255.82, samples=2 00:09:28.963 iops : min= 3362, max= 3806, avg=3584.00, stdev=313.96, samples=2 00:09:28.963 lat (msec) : 2=0.03%, 10=7.17%, 20=69.44%, 50=20.18%, 100=3.08% 00:09:28.963 lat (msec) : 250=0.10% 00:09:28.963 cpu : usr=2.20%, sys=4.99%, ctx=441, majf=0, minf=2 00:09:28.963 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:28.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:28.963 issued rwts: total=3389,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.963 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:28.963 job1: (groupid=0, jobs=1): err= 0: pid=2414789: Thu Nov 28 12:33:11 2024 00:09:28.963 read: IOPS=5943, BW=23.2MiB/s (24.3MB/s)(23.3MiB/1003msec) 00:09:28.963 slat (nsec): min=1515, max=12591k, avg=84163.15, stdev=552295.09 00:09:28.963 clat (usec): min=2278, max=36261, avg=10948.60, stdev=3813.83 00:09:28.963 lat (usec): min=2283, max=36265, avg=11032.76, stdev=3837.23 00:09:28.963 clat percentiles (usec): 00:09:28.963 | 1.00th=[ 5604], 5.00th=[ 7439], 10.00th=[ 8160], 20.00th=[ 8848], 00:09:28.963 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10290], 60.00th=[10552], 00:09:28.963 | 70.00th=[10945], 80.00th=[11863], 90.00th=[13698], 95.00th=[16319], 00:09:28.963 | 99.00th=[31589], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:09:28.963 | 99.99th=[36439] 00:09:28.963 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:09:28.963 slat (usec): min=2, max=9709, avg=74.67, stdev=428.28 00:09:28.963 clat (usec): min=1398, max=32153, avg=10081.96, stdev=2161.08 00:09:28.963 lat (usec): min=1410, max=32156, avg=10156.63, stdev=2192.15 00:09:28.963 clat percentiles (usec): 00:09:28.963 | 1.00th=[ 5932], 5.00th=[ 7504], 10.00th=[ 8029], 
20.00th=[ 8586], 00:09:28.963 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:09:28.963 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11863], 95.00th=[12518], 00:09:28.963 | 99.00th=[17695], 99.50th=[27395], 99.90th=[29230], 99.95th=[29754], 00:09:28.963 | 99.99th=[32113] 00:09:28.964 bw ( KiB/s): min=24576, max=24576, per=35.40%, avg=24576.00, stdev= 0.00, samples=2 00:09:28.964 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:09:28.964 lat (msec) : 2=0.07%, 4=0.41%, 10=38.72%, 20=59.06%, 50=1.73% 00:09:28.964 cpu : usr=3.59%, sys=7.29%, ctx=612, majf=0, minf=2 00:09:28.964 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:28.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:28.964 issued rwts: total=5961,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:28.964 job2: (groupid=0, jobs=1): err= 0: pid=2414810: Thu Nov 28 12:33:11 2024 00:09:28.964 read: IOPS=2897, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1003msec) 00:09:28.964 slat (nsec): min=1377, max=16475k, avg=195382.71, stdev=1018383.63 00:09:28.964 clat (usec): min=1170, max=53657, avg=22948.30, stdev=12180.66 00:09:28.964 lat (usec): min=5229, max=53664, avg=23143.68, stdev=12247.32 00:09:28.964 clat percentiles (usec): 00:09:28.964 | 1.00th=[ 8455], 5.00th=[ 9503], 10.00th=[11731], 20.00th=[12649], 00:09:28.964 | 30.00th=[15270], 40.00th=[17433], 50.00th=[18744], 60.00th=[21103], 00:09:28.964 | 70.00th=[22938], 80.00th=[36439], 90.00th=[42730], 95.00th=[48497], 00:09:28.964 | 99.00th=[53216], 99.50th=[53216], 99.90th=[53740], 99.95th=[53740], 00:09:28.964 | 99.99th=[53740] 00:09:28.964 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:09:28.964 slat (usec): min=2, max=17844, avg=133.19, stdev=853.53 00:09:28.964 clat (usec): min=1655, 
max=58953, avg=19614.32, stdev=11976.99 00:09:28.964 lat (usec): min=1669, max=58965, avg=19747.51, stdev=12025.80 00:09:28.964 clat percentiles (usec): 00:09:28.964 | 1.00th=[ 3392], 5.00th=[ 7242], 10.00th=[ 9634], 20.00th=[12518], 00:09:28.964 | 30.00th=[12911], 40.00th=[13829], 50.00th=[15664], 60.00th=[15926], 00:09:28.964 | 70.00th=[20317], 80.00th=[26870], 90.00th=[38011], 95.00th=[46924], 00:09:28.964 | 99.00th=[58459], 99.50th=[58983], 99.90th=[58983], 99.95th=[58983], 00:09:28.964 | 99.99th=[58983] 00:09:28.964 bw ( KiB/s): min= 9520, max=15056, per=17.70%, avg=12288.00, stdev=3914.54, samples=2 00:09:28.964 iops : min= 2380, max= 3764, avg=3072.00, stdev=978.64, samples=2 00:09:28.964 lat (msec) : 2=0.33%, 4=0.27%, 10=7.56%, 20=53.95%, 50=34.28% 00:09:28.964 lat (msec) : 100=3.61% 00:09:28.964 cpu : usr=2.50%, sys=4.19%, ctx=352, majf=0, minf=1 00:09:28.964 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:28.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:28.964 issued rwts: total=2906,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:28.964 job3: (groupid=0, jobs=1): err= 0: pid=2414819: Thu Nov 28 12:33:11 2024 00:09:28.964 read: IOPS=4101, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1002msec) 00:09:28.964 slat (nsec): min=1142, max=60882k, avg=128653.31, stdev=1368503.94 00:09:28.964 clat (usec): min=638, max=87174, avg=16870.67, stdev=14228.89 00:09:28.964 lat (usec): min=3348, max=95359, avg=16999.32, stdev=14307.44 00:09:28.964 clat percentiles (usec): 00:09:28.964 | 1.00th=[ 7242], 5.00th=[ 8848], 10.00th=[10028], 20.00th=[11600], 00:09:28.964 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 60.00th=[12780], 00:09:28.964 | 70.00th=[13173], 80.00th=[14615], 90.00th=[28181], 95.00th=[52691], 00:09:28.964 | 99.00th=[85459], 99.50th=[87557], 
99.90th=[87557], 99.95th=[87557], 00:09:28.964 | 99.99th=[87557] 00:09:28.964 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:09:28.964 slat (nsec): min=1869, max=9865.6k, avg=92356.03, stdev=516570.12 00:09:28.964 clat (usec): min=1838, max=37192, avg=12381.37, stdev=3968.81 00:09:28.964 lat (usec): min=1842, max=37200, avg=12473.72, stdev=3982.86 00:09:28.964 clat percentiles (usec): 00:09:28.964 | 1.00th=[ 3621], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[10159], 00:09:28.964 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:09:28.964 | 70.00th=[12780], 80.00th=[13304], 90.00th=[14877], 95.00th=[17433], 00:09:28.964 | 99.00th=[32113], 99.50th=[32637], 99.90th=[36963], 99.95th=[36963], 00:09:28.964 | 99.99th=[36963] 00:09:28.964 bw ( KiB/s): min=16384, max=19576, per=25.90%, avg=17980.00, stdev=2257.08, samples=2 00:09:28.964 iops : min= 4096, max= 4894, avg=4495.00, stdev=564.27, samples=2 00:09:28.964 lat (usec) : 750=0.01% 00:09:28.964 lat (msec) : 2=0.16%, 4=0.85%, 10=12.87%, 20=76.84%, 50=6.37% 00:09:28.964 lat (msec) : 100=2.90% 00:09:28.964 cpu : usr=2.40%, sys=3.80%, ctx=446, majf=0, minf=1 00:09:28.964 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:28.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:28.964 issued rwts: total=4110,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:28.964 00:09:28.964 Run status group 0 (all jobs): 00:09:28.964 READ: bw=63.7MiB/s (66.8MB/s), 11.3MiB/s-23.2MiB/s (11.9MB/s-24.3MB/s), io=63.9MiB (67.0MB), run=1002-1003msec 00:09:28.964 WRITE: bw=67.8MiB/s (71.1MB/s), 12.0MiB/s-23.9MiB/s (12.5MB/s-25.1MB/s), io=68.0MiB (71.3MB), run=1002-1003msec 00:09:28.964 00:09:28.964 Disk stats (read/write): 00:09:28.964 nvme0n1: ios=2664/3072, merge=0/0, 
ticks=19437/33221, in_queue=52658, util=86.57% 00:09:28.964 nvme0n2: ios=5112/5120, merge=0/0, ticks=30173/28301, in_queue=58474, util=98.47% 00:09:28.964 nvme0n3: ios=2603/2809, merge=0/0, ticks=25636/23613, in_queue=49249, util=98.64% 00:09:28.964 nvme0n4: ios=3578/3584, merge=0/0, ticks=27501/18267, in_queue=45768, util=97.47% 00:09:28.964 12:33:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:28.964 12:33:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2414909 00:09:28.964 12:33:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:28.964 12:33:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:28.964 [global] 00:09:28.964 thread=1 00:09:28.964 invalidate=1 00:09:28.964 rw=read 00:09:28.964 time_based=1 00:09:28.964 runtime=10 00:09:28.964 ioengine=libaio 00:09:28.964 direct=1 00:09:28.964 bs=4096 00:09:28.964 iodepth=1 00:09:28.964 norandommap=1 00:09:28.964 numjobs=1 00:09:28.964 00:09:28.964 [job0] 00:09:28.964 filename=/dev/nvme0n1 00:09:28.964 [job1] 00:09:28.964 filename=/dev/nvme0n2 00:09:28.964 [job2] 00:09:28.964 filename=/dev/nvme0n3 00:09:28.964 [job3] 00:09:28.964 filename=/dev/nvme0n4 00:09:28.964 Could not set queue depth (nvme0n1) 00:09:28.964 Could not set queue depth (nvme0n2) 00:09:28.964 Could not set queue depth (nvme0n3) 00:09:28.964 Could not set queue depth (nvme0n4) 00:09:29.225 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.225 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.225 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.225 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:09:29.225 fio-3.35 00:09:29.225 Starting 4 threads 00:09:31.878 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:32.202 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=39108608, buflen=4096 00:09:32.202 fio: pid=2415228, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:32.202 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:32.202 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:32.202 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:32.202 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=303104, buflen=4096 00:09:32.202 fio: pid=2415227, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:32.465 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=47267840, buflen=4096 00:09:32.465 fio: pid=2415225, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:32.465 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:32.465 12:33:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:32.730 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=41377792, buflen=4096 00:09:32.730 fio: pid=2415226, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:09:32.730 12:33:15 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:32.730 12:33:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:32.730 00:09:32.730 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2415225: Thu Nov 28 12:33:15 2024 00:09:32.730 read: IOPS=3681, BW=14.4MiB/s (15.1MB/s)(45.1MiB/3135msec) 00:09:32.730 slat (usec): min=6, max=28509, avg=10.76, stdev=289.96 00:09:32.730 clat (usec): min=172, max=1852, avg=258.03, stdev=30.67 00:09:32.730 lat (usec): min=179, max=28925, avg=268.79, stdev=293.47 00:09:32.730 clat percentiles (usec): 00:09:32.730 | 1.00th=[ 196], 5.00th=[ 208], 10.00th=[ 221], 20.00th=[ 237], 00:09:32.730 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 269], 00:09:32.730 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 285], 95.00th=[ 289], 00:09:32.730 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 441], 99.95th=[ 498], 00:09:32.730 | 99.99th=[ 685] 00:09:32.730 bw ( KiB/s): min=14008, max=15866, per=39.77%, avg=14805.67, stdev=640.20, samples=6 00:09:32.730 iops : min= 3502, max= 3966, avg=3701.33, stdev=159.88, samples=6 00:09:32.730 lat (usec) : 250=35.44%, 500=64.51%, 750=0.03% 00:09:32.730 lat (msec) : 2=0.01% 00:09:32.730 cpu : usr=0.73%, sys=3.41%, ctx=11544, majf=0, minf=1 00:09:32.730 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.730 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.730 issued rwts: total=11541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.730 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.730 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): 
pid=2415226: Thu Nov 28 12:33:15 2024 00:09:32.730 read: IOPS=3007, BW=11.7MiB/s (12.3MB/s)(39.5MiB/3359msec) 00:09:32.730 slat (usec): min=6, max=15511, avg=14.84, stdev=320.85 00:09:32.730 clat (usec): min=187, max=42321, avg=316.70, stdev=1338.94 00:09:32.730 lat (usec): min=194, max=54966, avg=330.89, stdev=1414.04 00:09:32.730 clat percentiles (usec): 00:09:32.730 | 1.00th=[ 208], 5.00th=[ 225], 10.00th=[ 237], 20.00th=[ 258], 00:09:32.730 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:09:32.730 | 70.00th=[ 281], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 302], 00:09:32.730 | 99.00th=[ 322], 99.50th=[ 363], 99.90th=[28443], 99.95th=[41157], 00:09:32.730 | 99.99th=[41681] 00:09:32.730 bw ( KiB/s): min= 4504, max=14648, per=33.71%, avg=12552.00, stdev=3950.40, samples=6 00:09:32.730 iops : min= 1126, max= 3662, avg=3138.00, stdev=987.60, samples=6 00:09:32.730 lat (usec) : 250=15.91%, 500=83.90%, 750=0.05%, 1000=0.01% 00:09:32.730 lat (msec) : 10=0.01%, 50=0.12% 00:09:32.730 cpu : usr=0.92%, sys=2.68%, ctx=10108, majf=0, minf=1 00:09:32.730 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.730 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.730 issued rwts: total=10103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.730 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.730 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2415227: Thu Nov 28 12:33:15 2024 00:09:32.730 read: IOPS=25, BW=100KiB/s (103kB/s)(296KiB/2949msec) 00:09:32.730 slat (usec): min=9, max=16810, avg=239.92, stdev=1939.25 00:09:32.730 clat (usec): min=222, max=41178, avg=39318.52, stdev=8080.88 00:09:32.730 lat (usec): min=233, max=57988, avg=39561.53, stdev=8364.68 00:09:32.730 clat percentiles (usec): 00:09:32.730 | 1.00th=[ 223], 5.00th=[40633], 
10.00th=[41157], 20.00th=[41157], 00:09:32.730 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:32.730 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:32.730 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:32.730 | 99.99th=[41157] 00:09:32.730 bw ( KiB/s): min= 96, max= 104, per=0.27%, avg=99.20, stdev= 4.38, samples=5 00:09:32.730 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:09:32.730 lat (usec) : 250=2.67%, 500=1.33% 00:09:32.730 lat (msec) : 50=94.67% 00:09:32.730 cpu : usr=0.10%, sys=0.00%, ctx=76, majf=0, minf=2 00:09:32.730 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.730 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.730 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.730 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.730 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2415228: Thu Nov 28 12:33:15 2024 00:09:32.730 read: IOPS=3495, BW=13.7MiB/s (14.3MB/s)(37.3MiB/2732msec) 00:09:32.730 slat (nsec): min=6033, max=39982, avg=8122.23, stdev=1296.29 00:09:32.730 clat (usec): min=200, max=1855, avg=273.37, stdev=31.69 00:09:32.730 lat (usec): min=208, max=1863, avg=281.49, stdev=31.70 00:09:32.730 clat percentiles (usec): 00:09:32.730 | 1.00th=[ 227], 5.00th=[ 239], 10.00th=[ 247], 20.00th=[ 255], 00:09:32.730 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:09:32.730 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 306], 00:09:32.730 | 99.00th=[ 330], 99.50th=[ 420], 99.90th=[ 506], 99.95th=[ 635], 00:09:32.730 | 99.99th=[ 1860] 00:09:32.730 bw ( KiB/s): min=13880, max=14752, per=38.02%, avg=14155.20, stdev=343.75, samples=5 00:09:32.730 iops : min= 3470, max= 3688, avg=3538.80, 
stdev=85.94, samples=5 00:09:32.730 lat (usec) : 250=13.66%, 500=86.19%, 750=0.13% 00:09:32.730 lat (msec) : 2=0.02% 00:09:32.730 cpu : usr=2.01%, sys=5.53%, ctx=9549, majf=0, minf=2 00:09:32.730 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.730 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.730 issued rwts: total=9549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.730 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.730 00:09:32.730 Run status group 0 (all jobs): 00:09:32.730 READ: bw=36.4MiB/s (38.1MB/s), 100KiB/s-14.4MiB/s (103kB/s-15.1MB/s), io=122MiB (128MB), run=2732-3359msec 00:09:32.730 00:09:32.730 Disk stats (read/write): 00:09:32.730 nvme0n1: ios=11504/0, merge=0/0, ticks=2915/0, in_queue=2915, util=94.48% 00:09:32.730 nvme0n2: ios=10123/0, merge=0/0, ticks=3195/0, in_queue=3195, util=94.83% 00:09:32.730 nvme0n3: ios=71/0, merge=0/0, ticks=2829/0, in_queue=2829, util=96.01% 00:09:32.730 nvme0n4: ios=9198/0, merge=0/0, ticks=2419/0, in_queue=2419, util=96.45% 00:09:32.730 12:33:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:32.730 12:33:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:32.992 12:33:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:32.992 12:33:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:33.252 12:33:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:09:33.252 12:33:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:33.511 12:33:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:33.511 12:33:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:33.771 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:33.771 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2414909 00:09:33.771 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:33.771 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:33.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.771 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:33.771 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:33.771 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:33.771 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.771 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:33.771 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.771 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:33.771 12:33:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:33.771 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:33.771 nvmf hotplug test: fio failed as expected 00:09:33.771 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:34.030 rmmod nvme_tcp 00:09:34.030 rmmod nvme_fabrics 00:09:34.030 rmmod nvme_keyring 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # 
set -e 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2411641 ']' 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2411641 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2411641 ']' 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2411641 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2411641 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2411641' 00:09:34.030 killing process with pid 2411641 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2411641 00:09:34.030 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2411641 00:09:34.290 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:34.290 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:34.290 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:34.290 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@297 -- # iptr 00:09:34.290 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:34.290 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:34.290 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:34.290 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:34.290 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:34.290 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.290 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.290 12:33:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:36.827 00:09:36.827 real 0m26.669s 00:09:36.827 user 1m46.532s 00:09:36.827 sys 0m8.614s 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.827 ************************************ 00:09:36.827 END TEST nvmf_fio_target 00:09:36.827 ************************************ 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.827 12:33:18 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:36.827 ************************************ 00:09:36.827 START TEST nvmf_bdevio 00:09:36.827 ************************************ 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:36.827 * Looking for test storage... 00:09:36.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@341 -- # ver2_l=1 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.827 12:33:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:36.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.827 --rc genhtml_branch_coverage=1 00:09:36.827 --rc genhtml_function_coverage=1 00:09:36.827 --rc genhtml_legend=1 00:09:36.827 --rc geninfo_all_blocks=1 00:09:36.827 --rc geninfo_unexecuted_blocks=1 00:09:36.827 00:09:36.827 ' 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:36.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.827 --rc genhtml_branch_coverage=1 00:09:36.827 --rc genhtml_function_coverage=1 00:09:36.827 --rc genhtml_legend=1 00:09:36.827 --rc geninfo_all_blocks=1 00:09:36.827 --rc geninfo_unexecuted_blocks=1 00:09:36.827 00:09:36.827 ' 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:36.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.827 --rc genhtml_branch_coverage=1 00:09:36.827 --rc genhtml_function_coverage=1 00:09:36.827 --rc genhtml_legend=1 00:09:36.827 --rc geninfo_all_blocks=1 00:09:36.827 --rc geninfo_unexecuted_blocks=1 00:09:36.827 00:09:36.827 ' 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:36.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.827 --rc genhtml_branch_coverage=1 00:09:36.827 --rc genhtml_function_coverage=1 00:09:36.827 --rc genhtml_legend=1 00:09:36.827 --rc geninfo_all_blocks=1 00:09:36.827 --rc geninfo_unexecuted_blocks=1 00:09:36.827 00:09:36.827 ' 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.827 12:33:19 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.827 12:33:19 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.827 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.828 12:33:19 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:36.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:36.828 
12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:36.828 12:33:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:42.100 12:33:24 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:42.100 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:42.100 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:42.100 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:42.101 
12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:42.101 Found net devices under 0000:86:00.0: cvl_0_0 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:42.101 Found net devices under 0000:86:00.1: cvl_0_1 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:42.101 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:42.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:42.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:09:42.361 00:09:42.361 --- 10.0.0.2 ping statistics --- 00:09:42.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.361 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:42.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:42.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:09:42.361 00:09:42.361 --- 10.0.0.1 ping statistics --- 00:09:42.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.361 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:42.361 12:33:24 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2419487 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2419487 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2419487 ']' 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.361 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.361 [2024-11-28 12:33:24.735129] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:09:42.361 [2024-11-28 12:33:24.735174] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.361 [2024-11-28 12:33:24.800908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.361 [2024-11-28 12:33:24.840371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.361 [2024-11-28 12:33:24.840412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.361 [2024-11-28 12:33:24.840420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.361 [2024-11-28 12:33:24.840426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.361 [2024-11-28 12:33:24.840431] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:42.361 [2024-11-28 12:33:24.842083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:42.361 [2024-11-28 12:33:24.842115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:42.361 [2024-11-28 12:33:24.842226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.361 [2024-11-28 12:33:24.842226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:42.620 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.620 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:42.620 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:42.620 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:42.620 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.620 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.620 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:42.620 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.620 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.620 [2024-11-28 12:33:24.991952] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:42.620 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.620 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:42.620 12:33:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.620 12:33:24 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.620 Malloc0 00:09:42.620 12:33:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.620 12:33:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:42.620 12:33:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.620 12:33:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.620 12:33:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.620 12:33:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:42.620 12:33:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.620 12:33:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.620 12:33:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.620 12:33:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:42.620 12:33:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.620 12:33:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.620 [2024-11-28 12:33:25.054934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:42.620 12:33:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.620 12:33:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:09:42.620 12:33:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:42.620 12:33:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:42.620 12:33:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:42.620 12:33:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:42.620 12:33:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:42.620 { 00:09:42.620 "params": { 00:09:42.620 "name": "Nvme$subsystem", 00:09:42.620 "trtype": "$TEST_TRANSPORT", 00:09:42.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:42.620 "adrfam": "ipv4", 00:09:42.620 "trsvcid": "$NVMF_PORT", 00:09:42.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:42.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:42.620 "hdgst": ${hdgst:-false}, 00:09:42.620 "ddgst": ${ddgst:-false} 00:09:42.620 }, 00:09:42.620 "method": "bdev_nvme_attach_controller" 00:09:42.620 } 00:09:42.620 EOF 00:09:42.620 )") 00:09:42.620 12:33:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:42.620 12:33:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:42.620 12:33:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:42.620 12:33:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:42.620 "params": { 00:09:42.620 "name": "Nvme1", 00:09:42.620 "trtype": "tcp", 00:09:42.620 "traddr": "10.0.0.2", 00:09:42.620 "adrfam": "ipv4", 00:09:42.620 "trsvcid": "4420", 00:09:42.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:42.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:42.620 "hdgst": false, 00:09:42.620 "ddgst": false 00:09:42.620 }, 00:09:42.620 "method": "bdev_nvme_attach_controller" 00:09:42.620 }' 00:09:42.620 [2024-11-28 12:33:25.107762] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:09:42.620 [2024-11-28 12:33:25.107804] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2419537 ] 00:09:42.878 [2024-11-28 12:33:25.171253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:42.878 [2024-11-28 12:33:25.215916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.878 [2024-11-28 12:33:25.216031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.878 [2024-11-28 12:33:25.216034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.135 I/O targets: 00:09:43.135 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:43.135 00:09:43.135 00:09:43.135 CUnit - A unit testing framework for C - Version 2.1-3 00:09:43.135 http://cunit.sourceforge.net/ 00:09:43.135 00:09:43.135 00:09:43.135 Suite: bdevio tests on: Nvme1n1 00:09:43.135 Test: blockdev write read block ...passed 00:09:43.135 Test: blockdev write zeroes read block ...passed 00:09:43.135 Test: blockdev write zeroes read no split ...passed 00:09:43.135 Test: blockdev write zeroes read split 
...passed 00:09:43.135 Test: blockdev write zeroes read split partial ...passed 00:09:43.135 Test: blockdev reset ...[2024-11-28 12:33:25.609116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:43.135 [2024-11-28 12:33:25.609178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191d350 (9): Bad file descriptor 00:09:43.393 [2024-11-28 12:33:25.713047] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:43.393 passed 00:09:43.393 Test: blockdev write read 8 blocks ...passed 00:09:43.393 Test: blockdev write read size > 128k ...passed 00:09:43.393 Test: blockdev write read invalid size ...passed 00:09:43.393 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:43.393 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:43.393 Test: blockdev write read max offset ...passed 00:09:43.393 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:43.652 Test: blockdev writev readv 8 blocks ...passed 00:09:43.652 Test: blockdev writev readv 30 x 1block ...passed 00:09:43.652 Test: blockdev writev readv block ...passed 00:09:43.652 Test: blockdev writev readv size > 128k ...passed 00:09:43.652 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:43.652 Test: blockdev comparev and writev ...[2024-11-28 12:33:26.003762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:43.652 [2024-11-28 12:33:26.003788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:43.652 [2024-11-28 12:33:26.003801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:43.652 [2024-11-28 
12:33:26.003810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:43.652 [2024-11-28 12:33:26.004080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:43.652 [2024-11-28 12:33:26.004091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:43.652 [2024-11-28 12:33:26.004103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:43.652 [2024-11-28 12:33:26.004110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:43.652 [2024-11-28 12:33:26.004363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:43.652 [2024-11-28 12:33:26.004373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:43.652 [2024-11-28 12:33:26.004385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:43.652 [2024-11-28 12:33:26.004392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:43.652 [2024-11-28 12:33:26.004642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:43.652 [2024-11-28 12:33:26.004651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:43.652 [2024-11-28 12:33:26.004662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:43.652 [2024-11-28 12:33:26.004669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:43.652 passed 00:09:43.653 Test: blockdev nvme passthru rw ...passed 00:09:43.653 Test: blockdev nvme passthru vendor specific ...[2024-11-28 12:33:26.086308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:43.653 [2024-11-28 12:33:26.086325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:43.653 [2024-11-28 12:33:26.086433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:43.653 [2024-11-28 12:33:26.086442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:43.653 [2024-11-28 12:33:26.086549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:43.653 [2024-11-28 12:33:26.086558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:43.653 [2024-11-28 12:33:26.086661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:43.653 [2024-11-28 12:33:26.086670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:43.653 passed 00:09:43.653 Test: blockdev nvme admin passthru ...passed 00:09:43.653 Test: blockdev copy ...passed 00:09:43.653 00:09:43.653 Run Summary: Type Total Ran Passed Failed Inactive 00:09:43.653 suites 1 1 n/a 0 0 00:09:43.653 tests 23 23 23 0 0 00:09:43.653 asserts 152 152 152 0 n/a 00:09:43.653 00:09:43.653 Elapsed time = 1.378 seconds 
00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:43.912 rmmod nvme_tcp 00:09:43.912 rmmod nvme_fabrics 00:09:43.912 rmmod nvme_keyring 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2419487 ']' 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2419487 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 2419487 ']' 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2419487 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2419487 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2419487' 00:09:43.912 killing process with pid 2419487 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2419487 00:09:43.912 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2419487 00:09:44.171 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:44.171 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:44.171 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:44.171 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:44.171 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:44.171 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:44.171 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:44.171 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:09:44.171 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:44.171 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.171 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.171 12:33:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:46.708 00:09:46.708 real 0m9.837s 00:09:46.708 user 0m11.095s 00:09:46.708 sys 0m4.729s 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.708 ************************************ 00:09:46.708 END TEST nvmf_bdevio 00:09:46.708 ************************************ 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:46.708 00:09:46.708 real 4m27.999s 00:09:46.708 user 10m14.672s 00:09:46.708 sys 1m32.725s 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:46.708 ************************************ 00:09:46.708 END TEST nvmf_target_core 00:09:46.708 ************************************ 00:09:46.708 12:33:28 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:46.708 12:33:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:46.708 12:33:28 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.708 12:33:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:09:46.708 ************************************ 00:09:46.708 START TEST nvmf_target_extra 00:09:46.708 ************************************ 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:46.708 * Looking for test storage... 00:09:46.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:46.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.708 --rc genhtml_branch_coverage=1 00:09:46.708 --rc genhtml_function_coverage=1 00:09:46.708 --rc genhtml_legend=1 00:09:46.708 --rc geninfo_all_blocks=1 
00:09:46.708 --rc geninfo_unexecuted_blocks=1 00:09:46.708 00:09:46.708 ' 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:46.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.708 --rc genhtml_branch_coverage=1 00:09:46.708 --rc genhtml_function_coverage=1 00:09:46.708 --rc genhtml_legend=1 00:09:46.708 --rc geninfo_all_blocks=1 00:09:46.708 --rc geninfo_unexecuted_blocks=1 00:09:46.708 00:09:46.708 ' 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:46.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.708 --rc genhtml_branch_coverage=1 00:09:46.708 --rc genhtml_function_coverage=1 00:09:46.708 --rc genhtml_legend=1 00:09:46.708 --rc geninfo_all_blocks=1 00:09:46.708 --rc geninfo_unexecuted_blocks=1 00:09:46.708 00:09:46.708 ' 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:46.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.708 --rc genhtml_branch_coverage=1 00:09:46.708 --rc genhtml_function_coverage=1 00:09:46.708 --rc genhtml_legend=1 00:09:46.708 --rc geninfo_all_blocks=1 00:09:46.708 --rc geninfo_unexecuted_blocks=1 00:09:46.708 00:09:46.708 ' 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.708 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:46.709 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:46.709 ************************************ 00:09:46.709 START TEST nvmf_example 00:09:46.709 ************************************ 00:09:46.709 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:46.709 * Looking for test storage... 00:09:46.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.709 
12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.709 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:46.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.709 --rc genhtml_branch_coverage=1 00:09:46.710 --rc genhtml_function_coverage=1 00:09:46.710 --rc genhtml_legend=1 00:09:46.710 --rc geninfo_all_blocks=1 00:09:46.710 --rc geninfo_unexecuted_blocks=1 00:09:46.710 00:09:46.710 ' 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:46.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.710 --rc genhtml_branch_coverage=1 00:09:46.710 --rc genhtml_function_coverage=1 00:09:46.710 --rc genhtml_legend=1 00:09:46.710 --rc geninfo_all_blocks=1 00:09:46.710 --rc geninfo_unexecuted_blocks=1 00:09:46.710 00:09:46.710 ' 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:46.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.710 --rc genhtml_branch_coverage=1 00:09:46.710 --rc genhtml_function_coverage=1 00:09:46.710 --rc genhtml_legend=1 00:09:46.710 --rc geninfo_all_blocks=1 00:09:46.710 --rc geninfo_unexecuted_blocks=1 00:09:46.710 00:09:46.710 ' 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:46.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.710 --rc 
genhtml_branch_coverage=1 00:09:46.710 --rc genhtml_function_coverage=1 00:09:46.710 --rc genhtml_legend=1 00:09:46.710 --rc geninfo_all_blocks=1 00:09:46.710 --rc geninfo_unexecuted_blocks=1 00:09:46.710 00:09:46.710 ' 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:46.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:46.710 12:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:46.710 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.711 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:46.711 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:46.711 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:46.711 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.711 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.711 
12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.711 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:46.711 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:46.711 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:46.711 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:51.978 12:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:51.978 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:51.978 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.978 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:51.979 Found net devices under 0000:86:00.0: cvl_0_0 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:52.237 12:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:52.237 Found net devices under 0000:86:00.1: cvl_0_1 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:52.237 
12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:52.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:52.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:09:52.237 00:09:52.237 --- 10.0.0.2 ping statistics --- 00:09:52.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.237 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:52.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:52.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:09:52.237 00:09:52.237 --- 10.0.0.1 ping statistics --- 00:09:52.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.237 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.237 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:52.238 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:52.238 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.238 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:52.238 12:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:52.496 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:52.496 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:52.496 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:52.496 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.496 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:52.496 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:52.496 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2423417 00:09:52.496 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:52.496 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2423417 00:09:52.496 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2423417 ']' 00:09:52.496 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.496 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.496 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:52.496 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.496 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.496 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:53.430 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.430 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:53.430 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:53.430 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:53.430 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:53.430 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:53.430 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.430 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:53.430 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.430 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:53.430 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.430 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:53.430 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.430 12:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:53.430 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:53.430 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.430 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:53.431 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.431 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:53.431 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:53.431 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.431 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:53.431 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.431 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.431 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.431 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:53.431 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.431 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:53.431 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:05.628 Initializing NVMe Controllers 00:10:05.628 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:05.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:05.628 Initialization complete. Launching workers. 00:10:05.628 ======================================================== 00:10:05.628 Latency(us) 00:10:05.628 Device Information : IOPS MiB/s Average min max 00:10:05.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17662.07 68.99 3622.88 711.40 16330.85 00:10:05.628 ======================================================== 00:10:05.628 Total : 17662.07 68.99 3622.88 711.40 16330.85 00:10:05.628 00:10:05.628 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:05.628 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:05.628 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:05.628 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:05.628 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:05.628 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:05.628 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:05.628 rmmod nvme_tcp 00:10:05.628 rmmod nvme_fabrics 00:10:05.628 rmmod nvme_keyring 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v 
-r nvme-fabrics 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2423417 ']' 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2423417 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2423417 ']' 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2423417 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2423417 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2423417' 00:10:05.628 killing process with pid 2423417 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2423417 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2423417 00:10:05.628 nvmf threads initialize successfully 00:10:05.628 bdev subsystem init successfully 00:10:05.628 created a nvmf target service 00:10:05.628 create targets's poll groups done 00:10:05.628 all subsystems of target started 00:10:05.628 nvmf target is running 00:10:05.628 all subsystems of target stopped 00:10:05.628 
destroy targets's poll groups done 00:10:05.628 destroyed the nvmf target service 00:10:05.628 bdev subsystem finish successfully 00:10:05.628 nvmf threads destroy successfully 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.628 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.887 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:05.887 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:05.887 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:05.887 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 
00:10:06.145 00:10:06.145 real 0m19.416s 00:10:06.145 user 0m45.977s 00:10:06.145 sys 0m5.829s 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.145 ************************************ 00:10:06.145 END TEST nvmf_example 00:10:06.145 ************************************ 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:06.145 ************************************ 00:10:06.145 START TEST nvmf_filesystem 00:10:06.145 ************************************ 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:06.145 * Looking for test storage... 
00:10:06.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:06.145 
12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:06.145 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:06.146 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:06.146 --rc genhtml_branch_coverage=1 00:10:06.146 --rc genhtml_function_coverage=1 00:10:06.146 --rc genhtml_legend=1 00:10:06.146 --rc geninfo_all_blocks=1 00:10:06.146 --rc geninfo_unexecuted_blocks=1 00:10:06.146 00:10:06.146 ' 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:06.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.146 --rc genhtml_branch_coverage=1 00:10:06.146 --rc genhtml_function_coverage=1 00:10:06.146 --rc genhtml_legend=1 00:10:06.146 --rc geninfo_all_blocks=1 00:10:06.146 --rc geninfo_unexecuted_blocks=1 00:10:06.146 00:10:06.146 ' 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:06.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.146 --rc genhtml_branch_coverage=1 00:10:06.146 --rc genhtml_function_coverage=1 00:10:06.146 --rc genhtml_legend=1 00:10:06.146 --rc geninfo_all_blocks=1 00:10:06.146 --rc geninfo_unexecuted_blocks=1 00:10:06.146 00:10:06.146 ' 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:06.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.146 --rc genhtml_branch_coverage=1 00:10:06.146 --rc genhtml_function_coverage=1 00:10:06.146 --rc genhtml_legend=1 00:10:06.146 --rc geninfo_all_blocks=1 00:10:06.146 --rc geninfo_unexecuted_blocks=1 00:10:06.146 00:10:06.146 ' 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:06.146 12:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:06.146 12:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:06.146 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:06.408 12:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:06.408 12:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:06.408 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:06.409 12:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:06.409 
12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:06.409 #define SPDK_CONFIG_H 00:10:06.409 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:06.409 #define SPDK_CONFIG_APPS 1 00:10:06.409 #define SPDK_CONFIG_ARCH native 00:10:06.409 #undef SPDK_CONFIG_ASAN 00:10:06.409 #undef SPDK_CONFIG_AVAHI 00:10:06.409 #undef SPDK_CONFIG_CET 00:10:06.409 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:06.409 #define SPDK_CONFIG_COVERAGE 1 00:10:06.409 #define SPDK_CONFIG_CROSS_PREFIX 00:10:06.409 #undef SPDK_CONFIG_CRYPTO 00:10:06.409 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:06.409 #undef SPDK_CONFIG_CUSTOMOCF 00:10:06.409 #undef SPDK_CONFIG_DAOS 00:10:06.409 #define SPDK_CONFIG_DAOS_DIR 00:10:06.409 #define SPDK_CONFIG_DEBUG 1 00:10:06.409 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:06.409 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:06.409 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:06.409 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:06.409 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:06.409 #undef SPDK_CONFIG_DPDK_UADK 00:10:06.409 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:06.409 #define SPDK_CONFIG_EXAMPLES 1 00:10:06.409 #undef SPDK_CONFIG_FC 00:10:06.409 #define SPDK_CONFIG_FC_PATH 00:10:06.409 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:06.409 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:06.409 #define SPDK_CONFIG_FSDEV 1 00:10:06.409 #undef SPDK_CONFIG_FUSE 00:10:06.409 #undef SPDK_CONFIG_FUZZER 00:10:06.409 #define SPDK_CONFIG_FUZZER_LIB 00:10:06.409 #undef SPDK_CONFIG_GOLANG 00:10:06.409 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:06.409 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:06.409 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:06.409 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:06.409 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:06.409 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:06.409 #undef SPDK_CONFIG_HAVE_LZ4 00:10:06.409 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:06.409 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:06.409 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:06.409 #define SPDK_CONFIG_IDXD 1 00:10:06.409 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:06.409 #undef SPDK_CONFIG_IPSEC_MB 00:10:06.409 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:06.409 #define SPDK_CONFIG_ISAL 1 00:10:06.409 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:06.409 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:06.409 #define SPDK_CONFIG_LIBDIR 00:10:06.409 #undef SPDK_CONFIG_LTO 00:10:06.409 #define SPDK_CONFIG_MAX_LCORES 128 00:10:06.409 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:06.409 #define SPDK_CONFIG_NVME_CUSE 1 00:10:06.409 #undef SPDK_CONFIG_OCF 00:10:06.409 #define SPDK_CONFIG_OCF_PATH 00:10:06.409 #define SPDK_CONFIG_OPENSSL_PATH 00:10:06.409 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:06.409 #define SPDK_CONFIG_PGO_DIR 00:10:06.409 #undef SPDK_CONFIG_PGO_USE 00:10:06.409 #define SPDK_CONFIG_PREFIX /usr/local 00:10:06.409 #undef SPDK_CONFIG_RAID5F 00:10:06.409 #undef SPDK_CONFIG_RBD 00:10:06.409 #define SPDK_CONFIG_RDMA 1 00:10:06.409 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:06.409 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:06.409 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:06.409 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:06.409 #define SPDK_CONFIG_SHARED 1 00:10:06.409 #undef SPDK_CONFIG_SMA 00:10:06.409 #define SPDK_CONFIG_TESTS 1 00:10:06.409 #undef SPDK_CONFIG_TSAN 00:10:06.409 #define SPDK_CONFIG_UBLK 1 00:10:06.409 #define SPDK_CONFIG_UBSAN 1 00:10:06.409 #undef SPDK_CONFIG_UNIT_TESTS 00:10:06.409 #undef SPDK_CONFIG_URING 00:10:06.409 #define SPDK_CONFIG_URING_PATH 00:10:06.409 #undef SPDK_CONFIG_URING_ZNS 00:10:06.409 #undef SPDK_CONFIG_USDT 00:10:06.409 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:06.409 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:06.409 #define SPDK_CONFIG_VFIO_USER 1 00:10:06.409 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:06.409 #define SPDK_CONFIG_VHOST 1 00:10:06.409 #define SPDK_CONFIG_VIRTIO 1 00:10:06.409 #undef SPDK_CONFIG_VTUNE 00:10:06.409 #define SPDK_CONFIG_VTUNE_DIR 00:10:06.409 #define SPDK_CONFIG_WERROR 1 00:10:06.409 #define SPDK_CONFIG_WPDK_DIR 00:10:06.409 #undef SPDK_CONFIG_XNVME 00:10:06.409 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.409 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:06.410 12:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:06.410 
12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:06.410 12:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:06.410 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:06.411 
12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:06.411 12:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:06.411 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2425734 ]] 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2425734 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.mdqrB3 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.mdqrB3/tests/target /tmp/spdk.mdqrB3 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189041430528 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963961344 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6922530816 00:10:06.412 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971949568 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169748992 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192793088 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97980502016 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:10:06.413 12:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1478656 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:06.413 * Looking for test storage... 
00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189041430528 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9137123328 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.413 12:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:06.413 12:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.413 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:06.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.414 --rc genhtml_branch_coverage=1 00:10:06.414 --rc genhtml_function_coverage=1 00:10:06.414 --rc genhtml_legend=1 00:10:06.414 --rc geninfo_all_blocks=1 00:10:06.414 --rc geninfo_unexecuted_blocks=1 00:10:06.414 00:10:06.414 ' 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:06.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.414 --rc genhtml_branch_coverage=1 00:10:06.414 --rc genhtml_function_coverage=1 00:10:06.414 --rc genhtml_legend=1 00:10:06.414 --rc geninfo_all_blocks=1 00:10:06.414 --rc geninfo_unexecuted_blocks=1 00:10:06.414 00:10:06.414 ' 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:06.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.414 --rc genhtml_branch_coverage=1 00:10:06.414 --rc genhtml_function_coverage=1 00:10:06.414 --rc genhtml_legend=1 00:10:06.414 --rc geninfo_all_blocks=1 00:10:06.414 --rc geninfo_unexecuted_blocks=1 00:10:06.414 00:10:06.414 ' 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:06.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.414 --rc genhtml_branch_coverage=1 00:10:06.414 --rc genhtml_function_coverage=1 00:10:06.414 --rc genhtml_legend=1 00:10:06.414 --rc geninfo_all_blocks=1 00:10:06.414 --rc geninfo_unexecuted_blocks=1 00:10:06.414 00:10:06.414 ' 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.414 12:33:48 
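The `cmp_versions` walk traced above (scripts/common.sh@333-368) splits each dotted version on `.` and `-`, then compares component by component to decide whether the installed lcov is older than 2. A standalone sketch of that comparison, runnable without the SPDK tree (the function name `ver_lt` is mine, not from the script, and non-numeric components are not handled):

```shell
#!/usr/bin/env bash
# Compare two dotted version strings component by component,
# mirroring the cmp_versions walk seen in the trace above.
ver_lt() {
    local -a v1 v2
    local i n a b
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing components compare as 0 (so 1.15 == 1.15.0).
        a=${v1[i]:-0} b=${v2[i]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1   # equal is not "less than"
}

ver_lt 1.15 2 && echo "older"    # 1 < 2 decides in the first component
ver_lt 2.1 2.0 || echo "newer"
```

This matches the trace's outcome: `lt 1.15 2` returns 0, so the coverage-era lcov options are selected.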
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.414 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.673 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:06.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:06.673 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:06.673 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:06.673 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:06.673 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
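The log records a genuine script defect here: `'[' '' -eq 1 ']'` at nvmf/common.sh line 33 fails with "integer expression expected" because an empty variable reaches an integer comparison. A minimal reproduction and the usual guard, with `FLAG` as a hypothetical stand-in (the log does not show which variable that line tests):

```shell
#!/usr/bin/env bash
# Reproduce the nvmf/common.sh line 33 failure seen in the log:
# an empty value fed to an integer test. FLAG is a stand-in name,
# not the actual variable from the script.
FLAG=""   # simulate the empty value seen in the trace

# Broken form: empty string fed to -eq emits "integer expression expected".
if [ "$FLAG" -eq 1 ] 2>/dev/null; then
    echo "never reached"
fi

# Guarded form: default empty/unset to 0 before the integer test.
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset or empty"
fi
```

The test run continues regardless because the failing `[` only yields a false condition, but the stderr noise in the log is avoidable with the `${VAR:-0}` default.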
MALLOC_BDEV_SIZE=512 00:10:06.673 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:06.673 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:06.673 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:06.673 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.673 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:06.673 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:06.673 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:06.673 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.673 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.673 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.673 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:06.673 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:06.673 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:06.673 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.944 12:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:11.944 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:11.944 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.944 12:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:11.944 Found net devices under 0000:86:00.0: cvl_0_0 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:11.944 Found net devices under 0000:86:00.1: cvl_0_1 00:10:11.944 12:33:54 
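The discovery loop traced above globs `/sys/bus/pci/devices/$pci/net/*` and strips the path prefix to recover interface names (yielding the "Found net devices under ..." lines). A sketch of that lookup against a mock sysfs tree, so it runs without the E810 hardware:

```shell
#!/usr/bin/env bash
# Mirror the nvmf/common.sh@411-428 lookup: the kernel exposes the net
# interfaces bound to a PCI device under .../devices/<pci>/net/.
# A temporary mock tree stands in for /sys/bus/pci/devices here.
list_pci_net_devs() {
    local sysfs_root=$1 pci=$2
    local -a pci_net_devs=("$sysfs_root/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep interface names only
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
}

mock=$(mktemp -d)
mkdir -p "$mock/0000:86:00.0/net/cvl_0_0"
list_pci_net_devs "$mock" 0000:86:00.0   # prints: Found net devices under 0000:86:00.0: cvl_0_0
rm -rf "$mock"
```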
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:11.944 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:12.203 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:12.203 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:12.203 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:12.203 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:12.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:12.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:10:12.203 00:10:12.203 --- 10.0.0.2 ping statistics --- 00:10:12.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.203 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:10:12.203 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:12.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:12.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:10:12.203 00:10:12.203 --- 10.0.0.1 ping statistics --- 00:10:12.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.203 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:10:12.203 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.203 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:12.203 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:12.203 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.203 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:12.203 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:12.203 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.203 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:12.203 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:12.203 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:12.203 12:33:54 
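The `nvmf_tcp_init` steps traced above build the test topology: one port (cvl_0_0) is moved into a private namespace as the target side, its peer (cvl_0_1) stays in the root namespace as the initiator, TCP port 4420 is opened toward the initiator interface, and both directions are verified with ping. Condensed from the log into one fragment (root and the two interfaces are required, so it is not runnable here):

```shell
# Condensed from the nvmf_tcp_init commands in the log (root required).
ip netns add cvl_0_0_ns_spdk                      # private ns for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
```

Because the target runs inside `cvl_0_0_ns_spdk`, every later target-side command in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk` via `NVMF_TARGET_NS_CMD`.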
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:12.203 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.204 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:12.204 ************************************ 00:10:12.204 START TEST nvmf_filesystem_no_in_capsule 00:10:12.204 ************************************ 00:10:12.204 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:12.204 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:12.204 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:12.204 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:12.204 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:12.204 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.204 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2428987 00:10:12.204 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2428987 00:10:12.204 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:12.204 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 2428987 ']' 00:10:12.204 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.204 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.204 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.204 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.204 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.204 [2024-11-28 12:33:54.661159] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:10:12.204 [2024-11-28 12:33:54.661202] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.462 [2024-11-28 12:33:54.728262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.462 [2024-11-28 12:33:54.768771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.462 [2024-11-28 12:33:54.768812] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:12.462 [2024-11-28 12:33:54.768819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.462 [2024-11-28 12:33:54.768826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.462 [2024-11-28 12:33:54.768832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.462 [2024-11-28 12:33:54.770363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.463 [2024-11-28 12:33:54.770450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.463 [2024-11-28 12:33:54.770514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.463 [2024-11-28 12:33:54.770516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.463 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.463 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:12.463 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:12.463 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:12.463 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.463 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.463 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:12.463 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:12.463 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.463 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.463 [2024-11-28 12:33:54.912997] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:12.463 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.463 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:12.463 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.463 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.723 Malloc1 00:10:12.723 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.723 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:12.723 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.723 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.723 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.723 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.724 [2024-11-28 12:33:55.067745] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:12.724 12:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:12.724 { 00:10:12.724 "name": "Malloc1", 00:10:12.724 "aliases": [ 00:10:12.724 "08b2cda5-42c7-4597-8475-f6838236daa3" 00:10:12.724 ], 00:10:12.724 "product_name": "Malloc disk", 00:10:12.724 "block_size": 512, 00:10:12.724 "num_blocks": 1048576, 00:10:12.724 "uuid": "08b2cda5-42c7-4597-8475-f6838236daa3", 00:10:12.724 "assigned_rate_limits": { 00:10:12.724 "rw_ios_per_sec": 0, 00:10:12.724 "rw_mbytes_per_sec": 0, 00:10:12.724 "r_mbytes_per_sec": 0, 00:10:12.724 "w_mbytes_per_sec": 0 00:10:12.724 }, 00:10:12.724 "claimed": true, 00:10:12.724 "claim_type": "exclusive_write", 00:10:12.724 "zoned": false, 00:10:12.724 "supported_io_types": { 00:10:12.724 "read": true, 00:10:12.724 "write": true, 00:10:12.724 "unmap": true, 00:10:12.724 "flush": true, 00:10:12.724 "reset": true, 00:10:12.724 "nvme_admin": false, 00:10:12.724 "nvme_io": false, 00:10:12.724 "nvme_io_md": false, 00:10:12.724 "write_zeroes": true, 00:10:12.724 "zcopy": true, 00:10:12.724 "get_zone_info": false, 00:10:12.724 "zone_management": false, 00:10:12.724 "zone_append": false, 00:10:12.724 "compare": false, 00:10:12.724 "compare_and_write": 
false, 00:10:12.724 "abort": true, 00:10:12.724 "seek_hole": false, 00:10:12.724 "seek_data": false, 00:10:12.724 "copy": true, 00:10:12.724 "nvme_iov_md": false 00:10:12.724 }, 00:10:12.724 "memory_domains": [ 00:10:12.724 { 00:10:12.724 "dma_device_id": "system", 00:10:12.724 "dma_device_type": 1 00:10:12.724 }, 00:10:12.724 { 00:10:12.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.724 "dma_device_type": 2 00:10:12.724 } 00:10:12.724 ], 00:10:12.724 "driver_specific": {} 00:10:12.724 } 00:10:12.724 ]' 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:12.724 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:14.098 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:14.098 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:14.098 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:14.098 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:14.098 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:15.998 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:15.998 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:15.998 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:15.998 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:15.998 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:15.998 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:15.998 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:15.998 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:15.998 12:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:15.998 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:15.998 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:15.998 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:15.998 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:15.998 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:15.998 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:15.998 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:15.998 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:16.256 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:16.256 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:17.630 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:17.630 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:17.630 12:33:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:17.630 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.630 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:17.630 ************************************ 00:10:17.630 START TEST filesystem_ext4 00:10:17.630 ************************************ 00:10:17.630 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:17.630 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:17.630 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:17.630 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:17.630 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:17.630 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:17.630 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:17.630 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:17.630 12:33:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:17.630 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:17.630 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:17.630 mke2fs 1.47.0 (5-Feb-2023) 00:10:17.630 Discarding device blocks: 0/522240 done 00:10:17.630 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:17.631 Filesystem UUID: 44d72cb3-593a-48a7-957b-510bb38498fe 00:10:17.631 Superblock backups stored on blocks: 00:10:17.631 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:17.631 00:10:17.631 Allocating group tables: 0/64 done 00:10:17.631 Writing inode tables: 0/64 done 00:10:20.912 Creating journal (8192 blocks): done 00:10:22.670 Writing superblocks and filesystem accounting information: 0/64 done 00:10:22.670 00:10:22.670 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:22.670 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:27.934 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:27.934 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:27.934 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:27.934 12:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:27.934 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:27.934 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:27.934 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2428987 00:10:27.934 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:27.934 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:27.934 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:27.934 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:27.934 00:10:27.934 real 0m10.657s 00:10:27.934 user 0m0.027s 00:10:27.934 sys 0m0.079s 00:10:27.934 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.934 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:27.934 ************************************ 00:10:27.934 END TEST filesystem_ext4 00:10:27.934 ************************************ 00:10:28.192 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:28.192 
12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:28.192 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.192 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.192 ************************************ 00:10:28.192 START TEST filesystem_btrfs 00:10:28.192 ************************************ 00:10:28.192 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:28.192 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:28.192 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:28.192 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:28.192 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:28.192 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:28.192 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:28.192 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:28.192 12:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:28.192 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:28.192 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:28.449 btrfs-progs v6.8.1 00:10:28.449 See https://btrfs.readthedocs.io for more information. 00:10:28.449 00:10:28.449 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:28.449 NOTE: several default settings have changed in version 5.15, please make sure 00:10:28.449 this does not affect your deployments: 00:10:28.449 - DUP for metadata (-m dup) 00:10:28.449 - enabled no-holes (-O no-holes) 00:10:28.449 - enabled free-space-tree (-R free-space-tree) 00:10:28.449 00:10:28.449 Label: (null) 00:10:28.449 UUID: ab29bd84-7125-40f9-bb92-fccf9d2da8e3 00:10:28.449 Node size: 16384 00:10:28.449 Sector size: 4096 (CPU page size: 4096) 00:10:28.449 Filesystem size: 510.00MiB 00:10:28.449 Block group profiles: 00:10:28.449 Data: single 8.00MiB 00:10:28.449 Metadata: DUP 32.00MiB 00:10:28.449 System: DUP 8.00MiB 00:10:28.449 SSD detected: yes 00:10:28.449 Zoned device: no 00:10:28.449 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:28.449 Checksum: crc32c 00:10:28.449 Number of devices: 1 00:10:28.449 Devices: 00:10:28.449 ID SIZE PATH 00:10:28.449 1 510.00MiB /dev/nvme0n1p1 00:10:28.449 00:10:28.449 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:28.449 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:29.384 12:34:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:29.384 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:29.384 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:29.384 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:29.384 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:29.384 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:29.384 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2428987 00:10:29.384 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:29.384 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:29.384 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:29.384 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:29.384 00:10:29.384 real 0m1.118s 00:10:29.384 user 0m0.024s 00:10:29.384 sys 0m0.118s 00:10:29.384 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.384 
12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:29.384 ************************************ 00:10:29.384 END TEST filesystem_btrfs 00:10:29.384 ************************************ 00:10:29.384 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:29.384 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:29.384 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.385 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.385 ************************************ 00:10:29.385 START TEST filesystem_xfs 00:10:29.385 ************************************ 00:10:29.385 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:29.385 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:29.385 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:29.385 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:29.385 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:29.385 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:29.385 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:29.385 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:29.385 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:29.385 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:29.385 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:29.385 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:29.385 = sectsz=512 attr=2, projid32bit=1 00:10:29.385 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:29.385 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:29.385 data = bsize=4096 blocks=130560, imaxpct=25 00:10:29.385 = sunit=0 swidth=0 blks 00:10:29.385 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:29.385 log =internal log bsize=4096 blocks=16384, version=2 00:10:29.385 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:29.385 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:30.320 Discarding blocks...Done. 
00:10:30.320 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:30.320 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:32.850 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:32.850 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:32.850 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:32.850 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:32.850 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:32.850 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:32.850 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2428987 00:10:32.850 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:32.850 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:32.850 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:32.850 12:34:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:32.850 00:10:32.850 real 0m3.660s 00:10:32.850 user 0m0.024s 00:10:32.850 sys 0m0.076s 00:10:32.850 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.850 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:32.850 ************************************ 00:10:32.850 END TEST filesystem_xfs 00:10:32.850 ************************************ 00:10:33.107 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:33.107 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:33.107 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:33.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.107 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:33.107 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:33.107 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:33.107 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:33.107 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:33.107 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:33.107 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:33.107 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:33.107 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.107 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.107 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.107 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:33.107 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2428987 00:10:33.107 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2428987 ']' 00:10:33.108 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2428987 00:10:33.108 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:33.108 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.108 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2428987 00:10:33.366 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:33.366 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:33.366 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2428987' 00:10:33.366 killing process with pid 2428987 00:10:33.366 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2428987 00:10:33.366 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2428987 00:10:33.625 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:33.625 00:10:33.625 real 0m21.355s 00:10:33.625 user 1m24.182s 00:10:33.625 sys 0m1.517s 00:10:33.625 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.625 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.625 ************************************ 00:10:33.625 END TEST nvmf_filesystem_no_in_capsule 00:10:33.625 ************************************ 00:10:33.625 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:33.625 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:33.625 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.625 12:34:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.625 ************************************ 00:10:33.625 START TEST nvmf_filesystem_in_capsule 00:10:33.625 ************************************ 00:10:33.625 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:33.625 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:33.625 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:33.625 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:33.625 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:33.625 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.625 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2432678 00:10:33.625 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2432678 00:10:33.625 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:33.625 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2432678 ']' 00:10:33.625 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.625 12:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.625 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.625 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.625 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.625 [2024-11-28 12:34:16.092381] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:10:33.625 [2024-11-28 12:34:16.092421] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.884 [2024-11-28 12:34:16.157881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:33.884 [2024-11-28 12:34:16.201155] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:33.884 [2024-11-28 12:34:16.201194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:33.884 [2024-11-28 12:34:16.201202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:33.884 [2024-11-28 12:34:16.201208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:33.884 [2024-11-28 12:34:16.201214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:33.884 [2024-11-28 12:34:16.202834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.884 [2024-11-28 12:34:16.202931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.884 [2024-11-28 12:34:16.203019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.884 [2024-11-28 12:34:16.203022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.884 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.884 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:33.884 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:33.884 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:33.884 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.884 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.884 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:33.884 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:33.884 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.884 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.884 [2024-11-28 12:34:16.342395] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.884 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.884 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:33.884 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.884 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.143 Malloc1 00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.143 12:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.143 [2024-11-28 12:34:16.522141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.143 12:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:10:34.143 {
00:10:34.143 "name": "Malloc1",
00:10:34.143 "aliases": [
00:10:34.143 "4dfc807e-4db9-4270-81b7-335a6db73763"
00:10:34.143 ],
00:10:34.143 "product_name": "Malloc disk",
00:10:34.143 "block_size": 512,
00:10:34.143 "num_blocks": 1048576,
00:10:34.143 "uuid": "4dfc807e-4db9-4270-81b7-335a6db73763",
00:10:34.143 "assigned_rate_limits": {
00:10:34.143 "rw_ios_per_sec": 0,
00:10:34.143 "rw_mbytes_per_sec": 0,
00:10:34.143 "r_mbytes_per_sec": 0,
00:10:34.143 "w_mbytes_per_sec": 0
00:10:34.143 },
00:10:34.143 "claimed": true,
00:10:34.143 "claim_type": "exclusive_write",
00:10:34.143 "zoned": false,
00:10:34.143 "supported_io_types": {
00:10:34.143 "read": true,
00:10:34.143 "write": true,
00:10:34.143 "unmap": true,
00:10:34.143 "flush": true,
00:10:34.143 "reset": true,
00:10:34.143 "nvme_admin": false,
00:10:34.143 "nvme_io": false,
00:10:34.143 "nvme_io_md": false,
00:10:34.143 "write_zeroes": true,
00:10:34.143 "zcopy": true,
00:10:34.143 "get_zone_info": false,
00:10:34.143 "zone_management": false,
00:10:34.143 "zone_append": false,
00:10:34.143 "compare": false,
00:10:34.143 "compare_and_write": false,
00:10:34.143 "abort": true,
00:10:34.143 "seek_hole": false,
00:10:34.143 "seek_data": false,
00:10:34.143 "copy": true,
00:10:34.143 "nvme_iov_md": false
00:10:34.143 },
00:10:34.143 "memory_domains": [
00:10:34.143 {
00:10:34.143 "dma_device_id": "system",
00:10:34.143 "dma_device_type": 1
00:10:34.143 },
00:10:34.143 {
00:10:34.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:34.143 "dma_device_type": 2
00:10:34.143 }
00:10:34.143 ],
00:10:34.143 "driver_specific": {}
00:10:34.143 }
00:10:34.143 ]'
00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512
00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576
00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512
00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512
00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:10:34.143 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:35.516 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:10:35.516 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0
00:10:35.516 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:10:35.516 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n
'' ]] 00:10:35.516 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:37.418 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:37.418 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:37.418 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:37.418 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:37.418 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:37.418 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:37.418 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:37.418 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:37.418 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:37.418 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:37.418 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:37.418 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:37.418 12:34:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:37.418 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:37.418 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:37.418 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:37.418 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:37.676 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:38.610 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:39.544 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:39.544 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:39.544 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:39.544 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.544 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.544 ************************************ 00:10:39.544 START TEST filesystem_in_capsule_ext4 00:10:39.544 ************************************ 00:10:39.544 12:34:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:39.544 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:39.544 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:39.544 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:39.544 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:39.544 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:39.544 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:39.544 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:39.544 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:39.544 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:39.545 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:39.545 mke2fs 1.47.0 (5-Feb-2023) 00:10:39.545 Discarding device blocks: 
0/522240 done
00:10:39.545 Creating filesystem with 522240 1k blocks and 130560 inodes
00:10:39.545 Filesystem UUID: c9213ca3-ef32-41c6-8529-43bca83f40c5
00:10:39.545 Superblock backups stored on blocks:
00:10:39.545 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:10:39.545
00:10:39.545 Allocating group tables: 0/64 done
00:10:39.545 Writing inode tables: 0/64 done
00:10:39.545 Creating journal (8192 blocks): done
00:10:39.545 Writing superblocks and filesystem accounting information: 0/64 done
00:10:39.545
00:10:39.545 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0
00:10:39.545 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:46.098 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:46.098 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 --
target/filesystem.sh@37 -- # kill -0 2432678 00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:46.099 00:10:46.099 real 0m5.673s 00:10:46.099 user 0m0.031s 00:10:46.099 sys 0m0.067s 00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:46.099 ************************************ 00:10:46.099 END TEST filesystem_in_capsule_ext4 00:10:46.099 ************************************ 00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.099 ************************************ 00:10:46.099 START 
TEST filesystem_in_capsule_btrfs 00:10:46.099 ************************************ 00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:10:46.099 btrfs-progs v6.8.1
00:10:46.099 See https://btrfs.readthedocs.io for more information.
00:10:46.099
00:10:46.099 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:10:46.099 NOTE: several default settings have changed in version 5.15, please make sure
00:10:46.099 this does not affect your deployments:
00:10:46.099 - DUP for metadata (-m dup)
00:10:46.099 - enabled no-holes (-O no-holes)
00:10:46.099 - enabled free-space-tree (-R free-space-tree)
00:10:46.099
00:10:46.099 Label: (null)
00:10:46.099 UUID: d95197a7-19f2-4ebc-918a-88e3f39c230a
00:10:46.099 Node size: 16384
00:10:46.099 Sector size: 4096 (CPU page size: 4096)
00:10:46.099 Filesystem size: 510.00MiB
00:10:46.099 Block group profiles:
00:10:46.099 Data: single 8.00MiB
00:10:46.099 Metadata: DUP 32.00MiB
00:10:46.099 System: DUP 8.00MiB
00:10:46.099 SSD detected: yes
00:10:46.099 Zoned device: no
00:10:46.099 Features: extref, skinny-metadata, no-holes, free-space-tree
00:10:46.099 Checksum: crc32c
00:10:46.099 Number of devices: 1
00:10:46.099 Devices:
00:10:46.099 ID SIZE PATH
00:10:46.099 1 510.00MiB /dev/nvme0n1p1
00:10:46.099
00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0
00:10:46.099 12:34:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:46.099 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:46.099 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:10:46.099 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:46.099 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:46.099 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:46.099 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:46.099 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2432678 00:10:46.099 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:46.099 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:46.099 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:46.099 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:46.099 00:10:46.099 real 0m0.992s 00:10:46.099 user 0m0.030s 00:10:46.099 sys 0m0.109s 00:10:46.099 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.099 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:46.099 ************************************ 00:10:46.099 END TEST filesystem_in_capsule_btrfs 00:10:46.099 ************************************ 00:10:46.099 12:34:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:46.099 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:46.099 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.099 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.359 ************************************ 00:10:46.359 START TEST filesystem_in_capsule_xfs 00:10:46.359 ************************************ 00:10:46.359 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:46.359 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:46.359 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:46.359 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:46.359 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:46.359 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:46.359 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:46.359 
12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:46.359 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:46.359 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:46.359 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:46.359 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:46.359 = sectsz=512 attr=2, projid32bit=1 00:10:46.359 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:46.359 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:46.359 data = bsize=4096 blocks=130560, imaxpct=25 00:10:46.359 = sunit=0 swidth=0 blks 00:10:46.359 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:46.359 log =internal log bsize=4096 blocks=16384, version=2 00:10:46.359 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:46.359 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:47.293 Discarding blocks...Done. 
00:10:47.293 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:47.293 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:49.822 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:49.822 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:49.823 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:49.823 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:49.823 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:49.823 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2432678 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:49.823 00:10:49.823 real 0m3.409s 00:10:49.823 user 0m0.025s 00:10:49.823 sys 0m0.074s 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:49.823 ************************************ 00:10:49.823 END TEST filesystem_in_capsule_xfs 00:10:49.823 ************************************ 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:49.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:49.823 12:34:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2432678 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2432678 ']' 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2432678 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.823 12:34:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2432678 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2432678' 00:10:49.823 killing process with pid 2432678 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2432678 00:10:49.823 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2432678 00:10:50.101 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:50.101 00:10:50.101 real 0m16.567s 00:10:50.101 user 1m5.203s 00:10:50.101 sys 0m1.365s 00:10:50.101 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.101 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.101 ************************************ 00:10:50.101 END TEST nvmf_filesystem_in_capsule 00:10:50.101 ************************************ 00:10:50.360 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:50.360 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:50.360 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:50.360 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:50.360 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:50.360 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:50.360 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:50.360 rmmod nvme_tcp 00:10:50.360 rmmod nvme_fabrics 00:10:50.360 rmmod nvme_keyring 00:10:50.360 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:50.360 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:50.360 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:50.360 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:50.360 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:50.360 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:50.360 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:50.360 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:50.360 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:50.360 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:50.360 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:50.360 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:50.360 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:50.360 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.360 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.360 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.264 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:52.264 00:10:52.264 real 0m46.297s 00:10:52.264 user 2m31.372s 00:10:52.264 sys 0m7.307s 00:10:52.264 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.264 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:52.264 ************************************ 00:10:52.264 END TEST nvmf_filesystem 00:10:52.264 ************************************ 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:52.524 ************************************ 00:10:52.524 START TEST nvmf_target_discovery 00:10:52.524 ************************************ 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:52.524 * Looking for test storage... 
00:10:52.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:52.524 
12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:52.524 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:52.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.524 --rc genhtml_branch_coverage=1 00:10:52.524 --rc genhtml_function_coverage=1 00:10:52.524 --rc genhtml_legend=1 00:10:52.524 --rc geninfo_all_blocks=1 00:10:52.524 --rc geninfo_unexecuted_blocks=1 00:10:52.524 00:10:52.524 ' 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:52.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.524 --rc genhtml_branch_coverage=1 00:10:52.524 --rc genhtml_function_coverage=1 00:10:52.524 --rc genhtml_legend=1 00:10:52.524 --rc geninfo_all_blocks=1 00:10:52.524 --rc geninfo_unexecuted_blocks=1 00:10:52.524 00:10:52.524 ' 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:52.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.524 --rc genhtml_branch_coverage=1 00:10:52.524 --rc genhtml_function_coverage=1 00:10:52.524 --rc genhtml_legend=1 00:10:52.524 --rc geninfo_all_blocks=1 00:10:52.524 --rc geninfo_unexecuted_blocks=1 00:10:52.524 00:10:52.524 ' 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:52.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.524 --rc genhtml_branch_coverage=1 00:10:52.524 --rc genhtml_function_coverage=1 00:10:52.524 --rc genhtml_legend=1 00:10:52.524 --rc geninfo_all_blocks=1 00:10:52.524 --rc geninfo_unexecuted_blocks=1 00:10:52.524 00:10:52.524 ' 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:52.524 12:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.524 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:52.525 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.525 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:52.525 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:52.525 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:52.525 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:52.525 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.525 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.525 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:52.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:52.525 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:52.525 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:52.525 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:52.783 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:10:52.783 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:52.783 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:52.783 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:52.783 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:52.783 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:52.783 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:52.783 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:52.783 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:52.783 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:52.783 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.783 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.783 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.783 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:52.783 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:52.783 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:52.783 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.053 12:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:58.053 12:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:58.053 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:58.053 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:58.053 12:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:58.053 Found net devices under 0000:86:00.0: cvl_0_0 00:10:58.053 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:58.054 12:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:58.054 Found net devices under 0000:86:00.1: cvl_0_1 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
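The `common.sh@411`/`@427` steps traced above resolve each PCI function to its kernel netdev by globbing the device's `net/` directory in sysfs and then stripping everything up to the last `/`. A standalone sketch of that expansion, with the glob result stubbed in (the real script populates it from `/sys/bus/pci/devices/$pci/net/*`):

```shell
#!/usr/bin/env bash
# Sketch of common.sh's PCI-address -> netdev-name resolution. The array
# below stands in for the sysfs glob result on this machine.
pci_net_devs=("/sys/bus/pci/devices/0000:86:00.0/net/cvl_0_0")

# "${arr[@]##*/}" strips the longest prefix ending in '/', i.e. keeps only
# the basename of each entry.
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "Found net devices under 0000:86:00.0: ${pci_net_devs[0]}"
```

That basename (`cvl_0_0` here) is exactly what the log's "Found net devices under 0000:86:00.0: cvl_0_0" line reports and what gets appended to `net_devs`.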
> 1 )) 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:58.054 12:34:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:58.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:58.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:10:58.054 00:10:58.054 --- 10.0.0.2 ping statistics --- 00:10:58.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.054 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:58.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
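The `nvmf_tcp_init` sequence traced above can be condensed into the following dry-run sketch. `run` only echoes each command; with root privileges one could swap it for direct execution. Interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addressing are the ones this log established:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace plumbing nvmf_tcp_init performs:
# move the target-side port into a netns, address both sides, open the
# NVMe/TCP port in iptables, then ping both directions as a sanity check.
run() { echo "+ $*"; }   # replace with real execution (as root) to apply

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                  # target port -> netns
run ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator IP, root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                               # root ns -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> root ns
```

The two successful pings in the log (0.403 ms and 0.145 ms round trips) confirm both directions of this path before the target is even started.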
00:10:58.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:10:58.054 00:10:58.054 --- 10.0.0.1 ping statistics --- 00:10:58.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.054 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2438965 00:10:58.054 12:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2438965 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2438965 ']' 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.054 [2024-11-28 12:34:40.212933] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:10:58.054 [2024-11-28 12:34:40.212992] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.054 [2024-11-28 12:34:40.280139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.054 [2024-11-28 12:34:40.323280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:58.054 [2024-11-28 12:34:40.323317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:58.054 [2024-11-28 12:34:40.323325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:58.054 [2024-11-28 12:34:40.323331] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:58.054 [2024-11-28 12:34:40.323336] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:58.054 [2024-11-28 12:34:40.324893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.054 [2024-11-28 12:34:40.324993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.054 [2024-11-28 12:34:40.325080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.054 [2024-11-28 12:34:40.325082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
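The `waitforlisten 2438965` call traced above blocks until the freshly started `nvmf_tgt` is accepting RPCs on its UNIX socket, bounded by `max_retries=100`. A minimal sketch of that poll loop, with a temp file standing in for `/var/tmp/spdk.sock` and a delayed background job standing in for the target coming up:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: poll for the RPC socket's existence
# with a bounded retry count. Paths here are stand-ins for illustration.
rpc_sock=$(mktemp -u)                # stand-in for /var/tmp/spdk.sock
( sleep 0.2; : > "$rpc_sock" ) &     # stand-in for nvmf_tgt creating it

max_retries=100
i=0
while [ ! -e "$rpc_sock" ] && [ "$i" -lt "$max_retries" ]; do
  i=$((i + 1))
  sleep 0.05
done
wait                                 # reap the background stand-in
if [ -e "$rpc_sock" ]; then
  echo "listening after $i polls"
else
  echo "timed out"
fi
rm -f "$rpc_sock"
```

The real helper additionally verifies the PID is still alive between polls, which is why the log's next step (`return 0` via the `(( i == 0 ))` check) only runs once the socket answers.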
common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.054 [2024-11-28 12:34:40.464144] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.054 Null1 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.054 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.055 
12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.055 [2024-11-28 12:34:40.525091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.055 Null2 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.055 
12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.055 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.313 Null3 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.313 Null4 00:10:58.313 
12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
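The `for i in $(seq 1 4)` loop traced above performs the same three RPCs per iteration: create a null bdev, create a subsystem with a zero-padded serial, attach the bdev as a namespace, and add a TCP listener. A dry-run sketch of that loop (`run` only prints; the `scripts/rpc.py` path is an assumption, since the log drives these through the test harness's `rpc_cmd` wrapper):

```shell
#!/usr/bin/env bash
# Dry-run sketch of discovery.sh's per-target setup loop. 'run' echoes the
# command instead of executing it; rpc.py path is illustrative.
run() { echo "+ $*"; }
RPC="scripts/rpc.py"

for i in $(seq 1 4); do
  run "$RPC" bdev_null_create "Null$i" 102400 512
  run "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
      -a -s "$(printf 'SPDK%014d' "$i")"
  run "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  run "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
      -t tcp -a 10.0.0.2 -s 4420
done
```

`printf 'SPDK%014d' 1` reproduces the `SPDK00000000000001` serial seen in the trace; all four subsystems listen on the same 10.0.0.2:4420 endpoint inside the namespace.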
common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.313 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:58.571 00:10:58.571 Discovery Log Number of Records 6, Generation counter 6 00:10:58.571 =====Discovery Log Entry 0====== 00:10:58.571 trtype: tcp 00:10:58.571 adrfam: ipv4 00:10:58.571 subtype: current discovery subsystem 00:10:58.571 treq: not required 00:10:58.571 portid: 0 00:10:58.571 trsvcid: 4420 00:10:58.571 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:58.571 traddr: 10.0.0.2 00:10:58.571 eflags: explicit discovery connections, duplicate discovery information 00:10:58.571 sectype: none 00:10:58.571 =====Discovery Log Entry 1====== 00:10:58.571 trtype: tcp 00:10:58.571 adrfam: ipv4 00:10:58.571 subtype: nvme subsystem 00:10:58.571 treq: not required 00:10:58.571 portid: 0 00:10:58.571 trsvcid: 4420 00:10:58.571 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:58.571 traddr: 10.0.0.2 00:10:58.571 eflags: none 00:10:58.571 sectype: none 00:10:58.571 =====Discovery Log Entry 2====== 00:10:58.571 
trtype: tcp 00:10:58.571 adrfam: ipv4 00:10:58.571 subtype: nvme subsystem 00:10:58.571 treq: not required 00:10:58.571 portid: 0 00:10:58.571 trsvcid: 4420 00:10:58.571 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:58.571 traddr: 10.0.0.2 00:10:58.571 eflags: none 00:10:58.571 sectype: none 00:10:58.571 =====Discovery Log Entry 3====== 00:10:58.571 trtype: tcp 00:10:58.571 adrfam: ipv4 00:10:58.571 subtype: nvme subsystem 00:10:58.571 treq: not required 00:10:58.571 portid: 0 00:10:58.571 trsvcid: 4420 00:10:58.571 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:58.571 traddr: 10.0.0.2 00:10:58.571 eflags: none 00:10:58.571 sectype: none 00:10:58.571 =====Discovery Log Entry 4====== 00:10:58.571 trtype: tcp 00:10:58.571 adrfam: ipv4 00:10:58.571 subtype: nvme subsystem 00:10:58.571 treq: not required 00:10:58.571 portid: 0 00:10:58.571 trsvcid: 4420 00:10:58.571 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:58.571 traddr: 10.0.0.2 00:10:58.571 eflags: none 00:10:58.571 sectype: none 00:10:58.571 =====Discovery Log Entry 5====== 00:10:58.571 trtype: tcp 00:10:58.571 adrfam: ipv4 00:10:58.571 subtype: discovery subsystem referral 00:10:58.571 treq: not required 00:10:58.571 portid: 0 00:10:58.571 trsvcid: 4430 00:10:58.571 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:58.571 traddr: 10.0.0.2 00:10:58.571 eflags: none 00:10:58.571 sectype: none 00:10:58.571 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:58.571 Perform nvmf subsystem discovery via RPC 00:10:58.571 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:58.571 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.571 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.571 [ 00:10:58.571 { 00:10:58.571 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:10:58.571 "subtype": "Discovery", 00:10:58.571 "listen_addresses": [ 00:10:58.571 { 00:10:58.571 "trtype": "TCP", 00:10:58.571 "adrfam": "IPv4", 00:10:58.571 "traddr": "10.0.0.2", 00:10:58.571 "trsvcid": "4420" 00:10:58.571 } 00:10:58.571 ], 00:10:58.571 "allow_any_host": true, 00:10:58.571 "hosts": [] 00:10:58.571 }, 00:10:58.571 { 00:10:58.571 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:58.571 "subtype": "NVMe", 00:10:58.571 "listen_addresses": [ 00:10:58.571 { 00:10:58.571 "trtype": "TCP", 00:10:58.571 "adrfam": "IPv4", 00:10:58.571 "traddr": "10.0.0.2", 00:10:58.571 "trsvcid": "4420" 00:10:58.571 } 00:10:58.571 ], 00:10:58.571 "allow_any_host": true, 00:10:58.571 "hosts": [], 00:10:58.571 "serial_number": "SPDK00000000000001", 00:10:58.571 "model_number": "SPDK bdev Controller", 00:10:58.571 "max_namespaces": 32, 00:10:58.571 "min_cntlid": 1, 00:10:58.571 "max_cntlid": 65519, 00:10:58.571 "namespaces": [ 00:10:58.571 { 00:10:58.571 "nsid": 1, 00:10:58.571 "bdev_name": "Null1", 00:10:58.571 "name": "Null1", 00:10:58.571 "nguid": "6CB7D891F3D34EDF95AAA68A57264AC8", 00:10:58.571 "uuid": "6cb7d891-f3d3-4edf-95aa-a68a57264ac8" 00:10:58.571 } 00:10:58.571 ] 00:10:58.571 }, 00:10:58.571 { 00:10:58.571 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:58.571 "subtype": "NVMe", 00:10:58.571 "listen_addresses": [ 00:10:58.571 { 00:10:58.571 "trtype": "TCP", 00:10:58.571 "adrfam": "IPv4", 00:10:58.571 "traddr": "10.0.0.2", 00:10:58.571 "trsvcid": "4420" 00:10:58.571 } 00:10:58.571 ], 00:10:58.571 "allow_any_host": true, 00:10:58.571 "hosts": [], 00:10:58.571 "serial_number": "SPDK00000000000002", 00:10:58.572 "model_number": "SPDK bdev Controller", 00:10:58.572 "max_namespaces": 32, 00:10:58.572 "min_cntlid": 1, 00:10:58.572 "max_cntlid": 65519, 00:10:58.572 "namespaces": [ 00:10:58.572 { 00:10:58.572 "nsid": 1, 00:10:58.572 "bdev_name": "Null2", 00:10:58.572 "name": "Null2", 00:10:58.572 "nguid": "897DF03BF8644DBBADD9E68750756D2A", 
00:10:58.572 "uuid": "897df03b-f864-4dbb-add9-e68750756d2a" 00:10:58.572 } 00:10:58.572 ] 00:10:58.572 }, 00:10:58.572 { 00:10:58.572 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:58.572 "subtype": "NVMe", 00:10:58.572 "listen_addresses": [ 00:10:58.572 { 00:10:58.572 "trtype": "TCP", 00:10:58.572 "adrfam": "IPv4", 00:10:58.572 "traddr": "10.0.0.2", 00:10:58.572 "trsvcid": "4420" 00:10:58.572 } 00:10:58.572 ], 00:10:58.572 "allow_any_host": true, 00:10:58.572 "hosts": [], 00:10:58.572 "serial_number": "SPDK00000000000003", 00:10:58.572 "model_number": "SPDK bdev Controller", 00:10:58.572 "max_namespaces": 32, 00:10:58.572 "min_cntlid": 1, 00:10:58.572 "max_cntlid": 65519, 00:10:58.572 "namespaces": [ 00:10:58.572 { 00:10:58.572 "nsid": 1, 00:10:58.572 "bdev_name": "Null3", 00:10:58.572 "name": "Null3", 00:10:58.572 "nguid": "5375C67DB03B4774BEA24C95F659A81B", 00:10:58.572 "uuid": "5375c67d-b03b-4774-bea2-4c95f659a81b" 00:10:58.572 } 00:10:58.572 ] 00:10:58.572 }, 00:10:58.572 { 00:10:58.572 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:58.572 "subtype": "NVMe", 00:10:58.572 "listen_addresses": [ 00:10:58.572 { 00:10:58.572 "trtype": "TCP", 00:10:58.572 "adrfam": "IPv4", 00:10:58.572 "traddr": "10.0.0.2", 00:10:58.572 "trsvcid": "4420" 00:10:58.572 } 00:10:58.572 ], 00:10:58.572 "allow_any_host": true, 00:10:58.572 "hosts": [], 00:10:58.572 "serial_number": "SPDK00000000000004", 00:10:58.572 "model_number": "SPDK bdev Controller", 00:10:58.572 "max_namespaces": 32, 00:10:58.572 "min_cntlid": 1, 00:10:58.572 "max_cntlid": 65519, 00:10:58.572 "namespaces": [ 00:10:58.572 { 00:10:58.572 "nsid": 1, 00:10:58.572 "bdev_name": "Null4", 00:10:58.572 "name": "Null4", 00:10:58.572 "nguid": "E1C07B99F93E46128E6120AD0CC7E393", 00:10:58.572 "uuid": "e1c07b99-f93e-4612-8e61-20ad0cc7e393" 00:10:58.572 } 00:10:58.572 ] 00:10:58.572 } 00:10:58.572 ] 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.572 
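The six discovery records reported earlier decompose exactly as the `nvmf_get_subsystems` JSON above predicts: one current discovery subsystem, four NVMe subsystems (`cnode1`..`cnode4`, each with one `NullN` namespace), and one referral on port 4430. A small sketch of the kind of check one could run against saved `nvme discover` output (the capture file name is hypothetical):

```shell
#!/usr/bin/env bash
# Count discovery log records in captured 'nvme discover' output.
# Expected here: 6 = 1 current discovery + 4 subsystems + 1 referral.
count_records() { grep -c '=====Discovery Log Entry' "$1"; }

out=$(mktemp)
cat > "$out" <<'EOF'
=====Discovery Log Entry 0======
=====Discovery Log Entry 1======
=====Discovery Log Entry 2======
=====Discovery Log Entry 3======
=====Discovery Log Entry 4======
=====Discovery Log Entry 5======
EOF
count_records "$out"
```

The heredoc stubs in just the entry headers from this run; against the real capture the same `grep -c` would also return 6, matching "Discovery Log Number of Records 6".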
12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.572 12:34:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.572 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:58.572 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:10:58.572 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:58.572 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:58.572 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:58.572 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:58.572 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:58.572 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:58.572 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:58.572 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:58.572 rmmod nvme_tcp 00:10:58.572 rmmod nvme_fabrics 00:10:58.572 rmmod nvme_keyring 00:10:58.572 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:58.572 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:58.572 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:58.572 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2438965 ']' 00:10:58.572 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2438965 00:10:58.572 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2438965 ']' 00:10:58.572 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2438965 00:10:58.572 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:10:58.572 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:58.572 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2438965 00:10:58.831 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:58.831 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:58.831 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2438965' 00:10:58.831 killing process with pid 2438965 00:10:58.831 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2438965 00:10:58.831 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2438965 00:10:58.831 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:58.831 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:58.831 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:58.831 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:58.831 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:58.831 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:58.831 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:58.831 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:58.831 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:58.831 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.831 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.831 12:34:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:01.364 00:11:01.364 real 0m8.503s 00:11:01.364 user 0m5.412s 00:11:01.364 sys 0m4.212s 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.364 ************************************ 00:11:01.364 END TEST nvmf_target_discovery 00:11:01.364 ************************************ 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:01.364 ************************************ 00:11:01.364 START TEST nvmf_referrals 00:11:01.364 ************************************ 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:01.364 * Looking for test storage... 
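The discovery-test teardown recorded above loops over `seq 1 4` (target/discovery.sh@42), deleting each subsystem (`nvmf_delete_subsystem`, @43) and its backing null bdev (`bdev_null_delete`, @44), then drops the discovery referral (@47). A minimal sketch of that sequence — `rpc` here is a hypothetical stand-in that only prints the RPC it would issue; the real test dispatches these through SPDK's `rpc_cmd` wrapper around `rpc.py`:

```shell
# Stand-in for the SPDK rpc_cmd wrapper: print the RPC instead of issuing it.
rpc() { printf 'rpc.py %s\n' "$*"; }

teardown_sketch() {
  # Same seq 1 4 loop as target/discovery.sh@42 in the log above.
  for i in $(seq 1 4); do
    rpc nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # discovery.sh@43
    rpc bdev_null_delete "Null$i"                             # discovery.sh@44
  done
  # Remove the discovery referral added during setup (discovery.sh@47).
  rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
}

teardown_sketch
```

After this loop the log's `bdev_get_bdevs | jq -r '.[].name'` check (discovery.sh@49) comes back empty, confirming all four null bdevs are gone.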
00:11:01.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:01.364 12:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.364 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:01.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.364 
--rc genhtml_branch_coverage=1 00:11:01.364 --rc genhtml_function_coverage=1 00:11:01.364 --rc genhtml_legend=1 00:11:01.364 --rc geninfo_all_blocks=1 00:11:01.365 --rc geninfo_unexecuted_blocks=1 00:11:01.365 00:11:01.365 ' 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:01.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.365 --rc genhtml_branch_coverage=1 00:11:01.365 --rc genhtml_function_coverage=1 00:11:01.365 --rc genhtml_legend=1 00:11:01.365 --rc geninfo_all_blocks=1 00:11:01.365 --rc geninfo_unexecuted_blocks=1 00:11:01.365 00:11:01.365 ' 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:01.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.365 --rc genhtml_branch_coverage=1 00:11:01.365 --rc genhtml_function_coverage=1 00:11:01.365 --rc genhtml_legend=1 00:11:01.365 --rc geninfo_all_blocks=1 00:11:01.365 --rc geninfo_unexecuted_blocks=1 00:11:01.365 00:11:01.365 ' 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:01.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.365 --rc genhtml_branch_coverage=1 00:11:01.365 --rc genhtml_function_coverage=1 00:11:01.365 --rc genhtml_legend=1 00:11:01.365 --rc geninfo_all_blocks=1 00:11:01.365 --rc geninfo_unexecuted_blocks=1 00:11:01.365 00:11:01.365 ' 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.365 
12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.365 12:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:01.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:01.365 12:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:01.365 12:34:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:06.778 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:06.778 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:06.779 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:06.779 Found net devices under 0000:86:00.0: cvl_0_0 00:11:06.779 12:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:06.779 Found net devices under 0000:86:00.1: cvl_0_1 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:06.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:06.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:11:06.779 00:11:06.779 --- 10.0.0.2 ping statistics --- 00:11:06.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.779 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:06.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:06.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:11:06.779 00:11:06.779 --- 10.0.0.1 ping statistics --- 00:11:06.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.779 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:06.779 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:07.039 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:07.039 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:07.039 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:07.039 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.039 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2442744 00:11:07.039 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2442744 00:11:07.039 
12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2442744 ']' 00:11:07.039 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.039 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.039 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.039 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.039 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:07.039 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.039 [2024-11-28 12:34:49.355342] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:11:07.039 [2024-11-28 12:34:49.355383] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.039 [2024-11-28 12:34:49.421095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.039 [2024-11-28 12:34:49.464055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.039 [2024-11-28 12:34:49.464092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:07.039 [2024-11-28 12:34:49.464099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.039 [2024-11-28 12:34:49.464106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.039 [2024-11-28 12:34:49.464111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:07.039 [2024-11-28 12:34:49.465674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.039 [2024-11-28 12:34:49.465771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.039 [2024-11-28 12:34:49.465860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.039 [2024-11-28 12:34:49.465861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.298 [2024-11-28 12:34:49.616457] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.298 [2024-11-28 12:34:49.638092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:07.298 12:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.298 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.299 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:07.299 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:07.299 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.299 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.299 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.299 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:07.299 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:07.299 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:07.299 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:07.299 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:07.299 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.299 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:07.299 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.299 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.299 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:07.299 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:07.299 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:07.299 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:07.299 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:07.299 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:07.299 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.299 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:07.557 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:07.557 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:07.557 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:07.557 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.557 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.557 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.557 12:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:07.557 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.557 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.557 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.557 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:07.557 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.557 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.557 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.557 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:07.557 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:07.557 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.557 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.557 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.557 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:07.557 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:07.557 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:07.557 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:07.557 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.557 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:07.557 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:07.815 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:08.073 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:08.073 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:08.073 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:08.073 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:08.073 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:08.073 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:08.073 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:08.073 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:08.073 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:08.073 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:08.073 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:08.073 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:08.073 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:08.330 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:08.330 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:08.330 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.330 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.330 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.330 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:08.330 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:08.330 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:08.330 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:08.330 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.331 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:08.331 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.331 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.331 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:08.331 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:08.588 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:08.588 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:08.588 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:08.588 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:08.588 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:08.588 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:08.588 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:08.588 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:08.588 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:08.588 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:08.588 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:08.588 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:08.588 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:08.846 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:08.846 12:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:08.846 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:08.846 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:08.846 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:08.846 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:09.104 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:09.104 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:09.104 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.104 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:09.104 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.104 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:09.104 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:09.104 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.104 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:09.104 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.104 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:09.104 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:09.104 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:09.104 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:09.104 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:09.104 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:09.104 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:09.362 rmmod nvme_tcp 00:11:09.362 rmmod nvme_fabrics 00:11:09.362 rmmod nvme_keyring 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2442744 ']' 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2442744 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2442744 ']' 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2442744 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2442744 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2442744' 00:11:09.362 killing process with pid 2442744 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 2442744 00:11:09.362 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2442744 00:11:09.621 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:09.621 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:09.621 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:09.621 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:09.621 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:09.621 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:09.621 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:09.621 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:09.621 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:09.621 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.621 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.621 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.523 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:11.808 00:11:11.808 real 0m10.612s 00:11:11.808 user 0m12.429s 00:11:11.808 sys 0m4.977s 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:11.808 
************************************ 00:11:11.808 END TEST nvmf_referrals 00:11:11.808 ************************************ 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:11.808 ************************************ 00:11:11.808 START TEST nvmf_connect_disconnect 00:11:11.808 ************************************ 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:11.808 * Looking for test storage... 
00:11:11.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:11.808 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:11.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.809 --rc genhtml_branch_coverage=1 00:11:11.809 --rc genhtml_function_coverage=1 00:11:11.809 --rc genhtml_legend=1 00:11:11.809 --rc geninfo_all_blocks=1 00:11:11.809 --rc geninfo_unexecuted_blocks=1 00:11:11.809 00:11:11.809 ' 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:11.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.809 --rc genhtml_branch_coverage=1 00:11:11.809 --rc genhtml_function_coverage=1 00:11:11.809 --rc genhtml_legend=1 00:11:11.809 --rc geninfo_all_blocks=1 00:11:11.809 --rc geninfo_unexecuted_blocks=1 00:11:11.809 00:11:11.809 ' 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:11.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.809 --rc genhtml_branch_coverage=1 00:11:11.809 --rc genhtml_function_coverage=1 00:11:11.809 --rc genhtml_legend=1 00:11:11.809 --rc geninfo_all_blocks=1 00:11:11.809 --rc geninfo_unexecuted_blocks=1 00:11:11.809 00:11:11.809 ' 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:11.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.809 --rc genhtml_branch_coverage=1 00:11:11.809 --rc genhtml_function_coverage=1 00:11:11.809 --rc genhtml_legend=1 00:11:11.809 --rc geninfo_all_blocks=1 00:11:11.809 --rc geninfo_unexecuted_blocks=1 00:11:11.809 00:11:11.809 ' 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:11.809 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.809 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:11.810 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:11.810 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:11.810 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:18.371 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:18.371 12:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:18.371 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:18.371 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:18.372 12:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:18.372 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:18.372 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:18.372 12:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:18.372 Found net devices under 0000:86:00.0: cvl_0_0 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:18.372 12:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:18.372 Found net devices under 0000:86:00.1: cvl_0_1 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:18.372 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:18.372 12:35:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:18.372 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:18.372 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:18.372 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:18.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:11:18.372 00:11:18.372 --- 10.0.0.2 ping statistics --- 00:11:18.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.372 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:11:18.372 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:18.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:18.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms
00:11:18.372
00:11:18.372 --- 10.0.0.1 ping statistics ---
00:11:18.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:18.373 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2446724
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2446724
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2446724 ']'
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:18.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:18.373 [2024-11-28 12:35:00.165505] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization...
00:11:18.373 [2024-11-28 12:35:00.165550] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:18.373 [2024-11-28 12:35:00.233663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:18.373 [2024-11-28 12:35:00.275625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:18.373 [2024-11-28 12:35:00.275663] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:18.373 [2024-11-28 12:35:00.275671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:18.373 [2024-11-28 12:35:00.275677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:18.373 [2024-11-28 12:35:00.275682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:18.373 [2024-11-28 12:35:00.277176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:18.373 [2024-11-28 12:35:00.277192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:18.373 [2024-11-28 12:35:00.277208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:18.373 [2024-11-28 12:35:00.277211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:18.373 [2024-11-28 12:35:00.427942] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:18.373 [2024-11-28 12:35:00.494501] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']'
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5
00:11:18.373 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x
00:11:21.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:24.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:28.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:31.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:34.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:34.760 12:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
00:11:34.761 12:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini
00:11:34.761 12:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:34.761 12:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync
00:11:34.761 12:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:34.761 12:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e
00:11:34.761 12:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:34.761 12:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:34.761 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:11:34.761 12:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:34.761 12:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e
00:11:34.761 12:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0
00:11:34.761 12:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2446724 ']'
00:11:34.761 12:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2446724
00:11:34.761 12:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2446724 ']'
00:11:34.761 12:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2446724
00:11:34.761 12:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname
00:11:34.761 12:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:34.761 12:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2446724
00:11:34.761 12:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:34.761 12:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:34.761 12:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2446724'
killing process with pid 2446724
00:11:34.761 12:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2446724
00:11:34.761 12:35:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2446724
00:11:34.761 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:34.761 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:34.761 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:34.761 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr
00:11:34.761 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:11:34.761 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:34.761 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:11:34.761 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:34.761 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:34.761 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:34.761 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:34.761 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:36.664 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:36.664
00:11:36.664 real 0m25.070s
00:11:36.664 user 1m8.344s
00:11:36.664 sys 0m5.732s
00:11:36.664 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:36.664 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:36.664 ************************************
00:11:36.664 END TEST nvmf_connect_disconnect
00:11:36.664 ************************************
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:36.924 ************************************
00:11:36.924 START TEST nvmf_multitarget
00:11:36.924 ************************************
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:11:36.924 * Looking for test storage...
00:11:36.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-:
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-:
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<'
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:11:36.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:36.924 --rc genhtml_branch_coverage=1
00:11:36.924 --rc genhtml_function_coverage=1
00:11:36.924 --rc genhtml_legend=1
00:11:36.924 --rc geninfo_all_blocks=1
00:11:36.924 --rc geninfo_unexecuted_blocks=1
00:11:36.924
00:11:36.924 '
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:11:36.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:36.924 --rc genhtml_branch_coverage=1
00:11:36.924 --rc genhtml_function_coverage=1
00:11:36.924 --rc genhtml_legend=1
00:11:36.924 --rc geninfo_all_blocks=1
00:11:36.924 --rc geninfo_unexecuted_blocks=1
00:11:36.924
00:11:36.924 '
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:11:36.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:36.924 --rc genhtml_branch_coverage=1
00:11:36.924 --rc genhtml_function_coverage=1
00:11:36.924 --rc genhtml_legend=1
00:11:36.924 --rc geninfo_all_blocks=1
00:11:36.924 --rc geninfo_unexecuted_blocks=1
00:11:36.924
00:11:36.924 '
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:11:36.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:36.924 --rc genhtml_branch_coverage=1
00:11:36.924 --rc genhtml_function_coverage=1
00:11:36.924 --rc genhtml_legend=1
00:11:36.924 --rc geninfo_all_blocks=1
00:11:36.924 --rc geninfo_unexecuted_blocks=1
00:11:36.924
00:11:36.924 '
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:11:36.924 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable
00:11:36.925 12:35:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:11:42.196 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:42.196 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=()
00:11:42.196 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs
00:11:42.196 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=()
00:11:42.196 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:11:42.196 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=()
00:11:42.196 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers
00:11:42.196 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=()
00:11:42.196 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs
00:11:42.196 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=()
00:11:42.196 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=()
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=()
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
Found 0000:86:00.0 (0x8086 - 0x159b)
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
Found 0000:86:00.1 (0x8086 - 0x159b)
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
Found net devices under 0000:86:00.0: cvl_0_0
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
Found net devices under 0000:86:00.1: cvl_0_1
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:42.197 12:35:24
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.197 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:42.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:11:42.457 00:11:42.457 --- 10.0.0.2 ping statistics --- 00:11:42.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.457 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:42.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:42.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:11:42.457 00:11:42.457 --- 10.0.0.1 ping statistics --- 00:11:42.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.457 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2452998 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2452998 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2452998 ']' 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.457 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:42.717 [2024-11-28 12:35:24.977233] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:11:42.718 [2024-11-28 12:35:24.977283] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.718 [2024-11-28 12:35:25.043722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:42.718 [2024-11-28 12:35:25.086617] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.718 [2024-11-28 12:35:25.086657] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:42.718 [2024-11-28 12:35:25.086664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.718 [2024-11-28 12:35:25.086670] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.718 [2024-11-28 12:35:25.086675] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:42.718 [2024-11-28 12:35:25.091965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.718 [2024-11-28 12:35:25.091983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.718 [2024-11-28 12:35:25.092071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.718 [2024-11-28 12:35:25.092073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.718 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.718 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:42.718 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:42.718 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:42.718 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:42.718 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.718 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:42.976 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:42.976 12:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:42.976 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:42.976 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:42.976 "nvmf_tgt_1" 00:11:42.976 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:43.235 "nvmf_tgt_2" 00:11:43.235 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:43.235 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:43.235 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:43.235 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:43.493 true 00:11:43.493 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:43.493 true 00:11:43.493 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:43.493 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:43.493 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:43.493 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:43.493 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:43.493 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:43.493 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:43.752 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:43.752 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:43.752 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:43.752 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:43.752 rmmod nvme_tcp 00:11:43.752 rmmod nvme_fabrics 00:11:43.752 rmmod nvme_keyring 00:11:43.752 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:43.752 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:43.752 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:43.752 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2452998 ']' 00:11:43.752 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2452998 00:11:43.752 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2452998 ']' 00:11:43.752 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2452998 00:11:43.752 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:43.752 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.752 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2452998 00:11:43.752 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.752 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.752 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2452998' 00:11:43.752 killing process with pid 2452998 00:11:43.752 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2452998 00:11:43.752 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2452998 00:11:44.011 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:44.011 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:44.011 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:44.011 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:44.011 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:44.011 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:44.011 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:44.011 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:44.011 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:44.011 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.011 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.011 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.916 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:45.916 00:11:45.916 real 0m9.121s 00:11:45.916 user 0m7.132s 00:11:45.916 sys 0m4.583s 00:11:45.916 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.916 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:45.916 ************************************ 00:11:45.916 END TEST nvmf_multitarget 00:11:45.916 ************************************ 00:11:45.916 12:35:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:45.916 12:35:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:45.916 12:35:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.916 12:35:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:45.916 ************************************ 00:11:45.916 START TEST nvmf_rpc 00:11:45.916 ************************************ 00:11:45.916 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:46.176 * Looking for test storage... 
00:11:46.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:46.176 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:46.176 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:46.176 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:46.176 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.177 12:35:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:46.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.177 --rc genhtml_branch_coverage=1 00:11:46.177 --rc genhtml_function_coverage=1 00:11:46.177 --rc genhtml_legend=1 00:11:46.177 --rc geninfo_all_blocks=1 00:11:46.177 --rc geninfo_unexecuted_blocks=1 
00:11:46.177 00:11:46.177 ' 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:46.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.177 --rc genhtml_branch_coverage=1 00:11:46.177 --rc genhtml_function_coverage=1 00:11:46.177 --rc genhtml_legend=1 00:11:46.177 --rc geninfo_all_blocks=1 00:11:46.177 --rc geninfo_unexecuted_blocks=1 00:11:46.177 00:11:46.177 ' 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:46.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.177 --rc genhtml_branch_coverage=1 00:11:46.177 --rc genhtml_function_coverage=1 00:11:46.177 --rc genhtml_legend=1 00:11:46.177 --rc geninfo_all_blocks=1 00:11:46.177 --rc geninfo_unexecuted_blocks=1 00:11:46.177 00:11:46.177 ' 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:46.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.177 --rc genhtml_branch_coverage=1 00:11:46.177 --rc genhtml_function_coverage=1 00:11:46.177 --rc genhtml_legend=1 00:11:46.177 --rc geninfo_all_blocks=1 00:11:46.177 --rc geninfo_unexecuted_blocks=1 00:11:46.177 00:11:46.177 ' 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.177 12:35:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:46.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.177 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:46.178 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:46.178 12:35:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:46.178 12:35:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.453 
12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:11:51.453 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.453 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:51.454 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:51.454 Found net devices under 0000:86:00.0: cvl_0_0 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:51.454 Found net devices under 0000:86:00.1: cvl_0_1 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.454 12:35:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:51.454 
12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:51.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:51.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:11:51.454 00:11:51.454 --- 10.0.0.2 ping statistics --- 00:11:51.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.454 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:51.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:51.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:11:51.454 00:11:51.454 --- 10.0.0.1 ping statistics --- 00:11:51.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.454 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2456780 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2456780 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2456780 
']' 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.454 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.713 [2024-11-28 12:35:33.997623] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:11:51.713 [2024-11-28 12:35:33.997671] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.713 [2024-11-28 12:35:34.063864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:51.713 [2024-11-28 12:35:34.106420] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.713 [2024-11-28 12:35:34.106456] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.713 [2024-11-28 12:35:34.106466] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.713 [2024-11-28 12:35:34.106472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:51.713 [2024-11-28 12:35:34.106477] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:51.713 [2024-11-28 12:35:34.107969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.714 [2024-11-28 12:35:34.108062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.714 [2024-11-28 12:35:34.108149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.714 [2024-11-28 12:35:34.108151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.714 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.714 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:51.714 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:51.714 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:51.714 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:51.973 "tick_rate": 2300000000, 00:11:51.973 "poll_groups": [ 00:11:51.973 { 00:11:51.973 "name": "nvmf_tgt_poll_group_000", 00:11:51.973 "admin_qpairs": 0, 00:11:51.973 "io_qpairs": 0, 00:11:51.973 
"current_admin_qpairs": 0, 00:11:51.973 "current_io_qpairs": 0, 00:11:51.973 "pending_bdev_io": 0, 00:11:51.973 "completed_nvme_io": 0, 00:11:51.973 "transports": [] 00:11:51.973 }, 00:11:51.973 { 00:11:51.973 "name": "nvmf_tgt_poll_group_001", 00:11:51.973 "admin_qpairs": 0, 00:11:51.973 "io_qpairs": 0, 00:11:51.973 "current_admin_qpairs": 0, 00:11:51.973 "current_io_qpairs": 0, 00:11:51.973 "pending_bdev_io": 0, 00:11:51.973 "completed_nvme_io": 0, 00:11:51.973 "transports": [] 00:11:51.973 }, 00:11:51.973 { 00:11:51.973 "name": "nvmf_tgt_poll_group_002", 00:11:51.973 "admin_qpairs": 0, 00:11:51.973 "io_qpairs": 0, 00:11:51.973 "current_admin_qpairs": 0, 00:11:51.973 "current_io_qpairs": 0, 00:11:51.973 "pending_bdev_io": 0, 00:11:51.973 "completed_nvme_io": 0, 00:11:51.973 "transports": [] 00:11:51.973 }, 00:11:51.973 { 00:11:51.973 "name": "nvmf_tgt_poll_group_003", 00:11:51.973 "admin_qpairs": 0, 00:11:51.973 "io_qpairs": 0, 00:11:51.973 "current_admin_qpairs": 0, 00:11:51.973 "current_io_qpairs": 0, 00:11:51.973 "pending_bdev_io": 0, 00:11:51.973 "completed_nvme_io": 0, 00:11:51.973 "transports": [] 00:11:51.973 } 00:11:51.973 ] 00:11:51.973 }' 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.973 [2024-11-28 12:35:34.350569] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:51.973 "tick_rate": 2300000000, 00:11:51.973 "poll_groups": [ 00:11:51.973 { 00:11:51.973 "name": "nvmf_tgt_poll_group_000", 00:11:51.973 "admin_qpairs": 0, 00:11:51.973 "io_qpairs": 0, 00:11:51.973 "current_admin_qpairs": 0, 00:11:51.973 "current_io_qpairs": 0, 00:11:51.973 "pending_bdev_io": 0, 00:11:51.973 "completed_nvme_io": 0, 00:11:51.973 "transports": [ 00:11:51.973 { 00:11:51.973 "trtype": "TCP" 00:11:51.973 } 00:11:51.973 ] 00:11:51.973 }, 00:11:51.973 { 00:11:51.973 "name": "nvmf_tgt_poll_group_001", 00:11:51.973 "admin_qpairs": 0, 00:11:51.973 "io_qpairs": 0, 00:11:51.973 "current_admin_qpairs": 0, 00:11:51.973 "current_io_qpairs": 0, 00:11:51.973 "pending_bdev_io": 0, 00:11:51.973 "completed_nvme_io": 0, 00:11:51.973 "transports": [ 00:11:51.973 { 00:11:51.973 "trtype": "TCP" 00:11:51.973 } 00:11:51.973 ] 00:11:51.973 }, 00:11:51.973 { 00:11:51.973 "name": "nvmf_tgt_poll_group_002", 00:11:51.973 "admin_qpairs": 0, 00:11:51.973 "io_qpairs": 0, 00:11:51.973 
"current_admin_qpairs": 0, 00:11:51.973 "current_io_qpairs": 0, 00:11:51.973 "pending_bdev_io": 0, 00:11:51.973 "completed_nvme_io": 0, 00:11:51.973 "transports": [ 00:11:51.973 { 00:11:51.973 "trtype": "TCP" 00:11:51.973 } 00:11:51.973 ] 00:11:51.973 }, 00:11:51.973 { 00:11:51.973 "name": "nvmf_tgt_poll_group_003", 00:11:51.973 "admin_qpairs": 0, 00:11:51.973 "io_qpairs": 0, 00:11:51.973 "current_admin_qpairs": 0, 00:11:51.973 "current_io_qpairs": 0, 00:11:51.973 "pending_bdev_io": 0, 00:11:51.973 "completed_nvme_io": 0, 00:11:51.973 "transports": [ 00:11:51.973 { 00:11:51.973 "trtype": "TCP" 00:11:51.973 } 00:11:51.973 ] 00:11:51.973 } 00:11:51.973 ] 00:11:51.973 }' 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:51.973 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:51.974 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:51.974 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:51.974 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:51.974 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:51.974 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:51.974 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:51.974 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:51.974 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:51.974 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:51.974 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:11:51.974 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:51.974 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:51.974 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.974 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.234 Malloc1 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.234 [2024-11-28 12:35:34.528520] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:52.234 
12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:52.234 [2024-11-28 12:35:34.557144] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:52.234 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:52.234 could not add new controller: failed to write to nvme-fabrics device 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.234 12:35:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.234 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:53.611 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:53.611 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:53.611 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:53.611 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:53.611 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:55.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:55.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:55.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:55.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:55.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:55.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:55.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:55.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.515 12:35:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:55.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:55.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:55.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:55.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:55.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:55.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:55.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:55.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:55.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:55.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:55.516 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.516 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:55.516 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.516 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:55.516 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:55.516 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:55.516 [2024-11-28 12:35:38.012495] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:55.775 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:55.775 could not add new controller: failed to write to nvme-fabrics device 00:11:55.775 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:55.775 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:55.775 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:55.775 12:35:38 
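The `es` handling visible above (`es=1`, `(( es > 128 ))`, `(( !es == 0 ))`) is autotest's `NOT` expected-failure wrapper: the disallowed-host `nvme connect` is supposed to fail, and the wrapper inverts that failure into a test pass. A reduced sketch of the pattern (assumption: simplified from common/autotest_common.sh as reflected in the trace; the real helper also validates the executable via `type -t`/`type -P` first):

```shell
# Simplified sketch of autotest's NOT helper, reconstructed from the
# es-handling lines in the trace above.
NOT() {
    local es=0
    "$@" || es=$?   # run the wrapped command, capture its exit status
    # Succeed only when the wrapped command failed, so an expected
    # rejection (like the disallowed-host connect above) passes.
    (( !es == 0 ))
}
```

So `NOT nvme connect ...` returns 0 here precisely because the target rejected the host with "Subsystem ... does not allow host ...".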
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:55.775 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:55.775 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.775 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.775 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.775 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:57.152 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.152 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:57.152 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.152 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:57.152 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.058 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.058 [2024-11-28 12:35:41.387714] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.059 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.059 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:59.059 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.059 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.059 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.059 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.059 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.059 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.059 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.059 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:00.435 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:00.435 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:00.435 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.436 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:00.436 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.340 12:35:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.340 [2024-11-28 12:35:44.756154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.340 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:03.717 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:03.717 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:03.717 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.717 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:03.717 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:05.621 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:05.621 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:05.621 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:05.621 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:05.621 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.621 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:05.621 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:05.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
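Each loop iteration above provisions the subsystem with the same four RPCs before the host connects. A sketch of that sequence (assumption: `rpc_cmd` is stubbed to echo instead of forwarding to SPDK's scripts/rpc.py as the real wrapper does, so only the call order from the trace is illustrated):

```shell
# Stub: the real rpc_cmd in the trace forwards to SPDK's scripts/rpc.py.
rpc_cmd() { echo "rpc.py $*"; }

# One iteration of the create/listen/namespace/allow sequence from
# target/rpc.sh, as reconstructed from the trace above.
provision_subsystem() {
    local nqn=$1 ip=$2 port=$3
    rpc_cmd nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener "$nqn" -t tcp -a "$ip" -s "$port"
    rpc_cmd nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host "$nqn"
}
```

With `allow_any_host` set, the subsequent `nvme connect -t tcp -n $nqn -a $ip -s $port` succeeds without the host being on the subsystem's allowlist, which is the inverse of the disallowed-host failures tested earlier.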
00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.621 [2024-11-28 12:35:48.126353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.621 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:05.880 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.880 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.880 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.880 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:06.813 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:06.813 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:06.813 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:12:06.813 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:06.813 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:09.383 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:09.383 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:09.383 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.383 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:09.383 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.383 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:09.383 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.383 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:09.383 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:09.383 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:09.383 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.383 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:09.383 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.383 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:12:09.383 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:09.383 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.383 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.383 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.384 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.384 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.384 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.384 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.384 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:09.384 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:09.384 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.384 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.384 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.384 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.384 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.384 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.384 [2024-11-28 12:35:51.479132] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.384 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.384 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:09.384 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.384 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.384 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.384 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:09.384 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.384 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.384 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.384 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.315 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:10.315 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:10.315 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.315 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:10.315 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:12:12.215 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:12.215 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:12.215 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:12.215 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:12.215 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:12.215 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:12.215 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:12.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.474 [2024-11-28 12:35:54.923369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.474 12:35:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.474 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:13.851 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:13.851 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:13.852 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:13.852 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:13.852 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.755 [2024-11-28 12:35:58.206119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.755 [2024-11-28 12:35:58.254136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.755 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.015 
12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.015 [2024-11-28 12:35:58.302278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:16.015 
12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.015 [2024-11-28 12:35:58.350439] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:16.015 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.016 [2024-11-28 
12:35:58.398605] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.016 
12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:16.016 "tick_rate": 2300000000, 00:12:16.016 "poll_groups": [ 00:12:16.016 { 00:12:16.016 "name": "nvmf_tgt_poll_group_000", 00:12:16.016 "admin_qpairs": 2, 00:12:16.016 "io_qpairs": 168, 00:12:16.016 "current_admin_qpairs": 0, 00:12:16.016 "current_io_qpairs": 0, 00:12:16.016 "pending_bdev_io": 0, 00:12:16.016 "completed_nvme_io": 268, 00:12:16.016 "transports": [ 00:12:16.016 { 00:12:16.016 "trtype": "TCP" 00:12:16.016 } 00:12:16.016 ] 00:12:16.016 }, 00:12:16.016 { 00:12:16.016 "name": "nvmf_tgt_poll_group_001", 00:12:16.016 "admin_qpairs": 2, 00:12:16.016 "io_qpairs": 168, 00:12:16.016 "current_admin_qpairs": 0, 00:12:16.016 "current_io_qpairs": 0, 00:12:16.016 "pending_bdev_io": 0, 00:12:16.016 "completed_nvme_io": 317, 00:12:16.016 "transports": [ 00:12:16.016 { 00:12:16.016 "trtype": "TCP" 00:12:16.016 } 00:12:16.016 ] 00:12:16.016 }, 00:12:16.016 { 00:12:16.016 "name": "nvmf_tgt_poll_group_002", 00:12:16.016 "admin_qpairs": 1, 00:12:16.016 "io_qpairs": 168, 00:12:16.016 "current_admin_qpairs": 0, 00:12:16.016 "current_io_qpairs": 0, 00:12:16.016 "pending_bdev_io": 0, 00:12:16.016 "completed_nvme_io": 214, 00:12:16.016 "transports": [ 00:12:16.016 { 00:12:16.016 "trtype": "TCP" 00:12:16.016 } 00:12:16.016 ] 00:12:16.016 }, 00:12:16.016 { 00:12:16.016 "name": "nvmf_tgt_poll_group_003", 00:12:16.016 "admin_qpairs": 2, 00:12:16.016 "io_qpairs": 168, 
00:12:16.016 "current_admin_qpairs": 0, 00:12:16.016 "current_io_qpairs": 0, 00:12:16.016 "pending_bdev_io": 0, 00:12:16.016 "completed_nvme_io": 223, 00:12:16.016 "transports": [ 00:12:16.016 { 00:12:16.016 "trtype": "TCP" 00:12:16.016 } 00:12:16.016 ] 00:12:16.016 } 00:12:16.016 ] 00:12:16.016 }' 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:16.016 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:16.275 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:16.275 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:16.275 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:16.275 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:16.275 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:16.275 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:16.275 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:16.275 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:16.275 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:16.275 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:16.275 rmmod nvme_tcp 00:12:16.275 rmmod nvme_fabrics 00:12:16.275 rmmod nvme_keyring 00:12:16.276 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:16.276 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:16.276 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:16.276 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2456780 ']' 00:12:16.276 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2456780 00:12:16.276 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2456780 ']' 00:12:16.276 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2456780 00:12:16.276 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:16.276 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.276 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2456780 00:12:16.276 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:16.276 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:16.276 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2456780' 00:12:16.276 killing process with pid 2456780 00:12:16.276 12:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2456780 00:12:16.276 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2456780 00:12:16.535 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:16.535 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:16.535 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:16.535 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:16.535 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:16.535 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:16.535 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:16.535 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:16.535 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:16.535 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.535 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.535 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.441 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:18.441 00:12:18.441 real 0m32.522s 00:12:18.441 user 1m40.007s 00:12:18.441 sys 0m6.124s 00:12:18.441 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.441 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.441 ************************************ 00:12:18.441 END TEST 
nvmf_rpc 00:12:18.441 ************************************ 00:12:18.701 12:36:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:18.701 12:36:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:18.701 12:36:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.701 12:36:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:18.701 ************************************ 00:12:18.701 START TEST nvmf_invalid 00:12:18.701 ************************************ 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:18.701 * Looking for test storage... 00:12:18.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:18.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.701 --rc genhtml_branch_coverage=1 00:12:18.701 --rc genhtml_function_coverage=1 00:12:18.701 --rc genhtml_legend=1 00:12:18.701 --rc geninfo_all_blocks=1 00:12:18.701 --rc geninfo_unexecuted_blocks=1 00:12:18.701 00:12:18.701 ' 
00:12:18.701 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:18.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.701 --rc genhtml_branch_coverage=1 00:12:18.701 --rc genhtml_function_coverage=1 00:12:18.701 --rc genhtml_legend=1 00:12:18.701 --rc geninfo_all_blocks=1 00:12:18.701 --rc geninfo_unexecuted_blocks=1 00:12:18.701 00:12:18.702 ' 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:18.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.702 --rc genhtml_branch_coverage=1 00:12:18.702 --rc genhtml_function_coverage=1 00:12:18.702 --rc genhtml_legend=1 00:12:18.702 --rc geninfo_all_blocks=1 00:12:18.702 --rc geninfo_unexecuted_blocks=1 00:12:18.702 00:12:18.702 ' 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:18.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.702 --rc genhtml_branch_coverage=1 00:12:18.702 --rc genhtml_function_coverage=1 00:12:18.702 --rc genhtml_legend=1 00:12:18.702 --rc geninfo_all_blocks=1 00:12:18.702 --rc geninfo_unexecuted_blocks=1 00:12:18.702 00:12:18.702 ' 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.702 12:36:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.702 
12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.702 12:36:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:18.702 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:18.702 12:36:01 
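The error printed above, `common.sh: line 33: [: : integer expression expected`, comes from the traced command `'[' '' -eq 1 ']'`: `[` requires integer operands for `-eq`, so an unset or empty expansion makes the test itself fail with status 2. A small hedged sketch of the failure mode and the usual defensive default (variable names here are illustrative):

```shell
# Reproduces the logged failure: an empty operand to -eq makes `[` error out
# (exit status 2) rather than evaluate to false.
flag=""

[ "$flag" -eq 1 ] 2>/dev/null || echo "test errored or was false"

# Defaulting the expansion keeps the comparison well-formed either way:
if [ "${flag:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```

The log continues past the error only because the script is not running under `set -e` at that point; the `|| echo` pattern above makes the same tolerance explicit.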
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:18.702 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:23.974 12:36:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.974 12:36:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:23.974 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:23.974 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.974 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:23.975 Found net devices under 0000:86:00.0: cvl_0_0 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:23.975 Found net devices under 0000:86:00.1: cvl_0_1 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:23.975 12:36:06 
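The device-discovery loop traced above globs each PCI device's `net/` directory in sysfs (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)`), then strips the path prefix (`"${pci_net_devs[@]##*/}"`) to get bare interface names such as `cvl_0_0`. A standalone sketch of that mapping (an assumption-based generic function; which devices appear depends entirely on the host):

```shell
# Map PCI devices to their network interface names via sysfs, as the log does.
list_pci_netdevs() {
  local pci devs
  for pci in /sys/bus/pci/devices/*; do
    devs=("$pci/net/"*)
    # Skip devices that expose no network interface (glob stayed literal).
    [ -e "${devs[0]}" ] || continue
    # Strip the directory prefix from every element, keeping bare names:
    echo "net devices under ${pci##*/}: ${devs[*]##*/}"
  done
  return 0
}

list_pci_netdevs
```

On a host without NICs (or without `/sys` mounted) the function simply prints nothing, which mirrors the log's `(( ${#pci_net_devs[@]} == 0 ))` guard.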
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:23.975 12:36:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:23.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:23.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:12:23.975 00:12:23.975 --- 10.0.0.2 ping statistics --- 00:12:23.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.975 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:23.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:23.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:12:23.975 00:12:23.975 --- 10.0.0.1 ping statistics --- 00:12:23.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.975 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:23.975 12:36:06 
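The `nvmf_tcp_init` sequence traced above moves one NIC into a fresh network namespace, addresses both sides, opens the NVMe/TCP listener port, and pings in both directions. A minimal re-creation of that plumbing, with the assumption that a veth pair stands in for the physical `cvl_0_*` interfaces (all names here are illustrative, not SPDK's); it needs root, hence the guard:

```shell
# Sketch of the netns setup traced in the log (assumption: veth pair instead
# of physical NICs; requires root, so non-root runs just skip).
if [ "$(id -u)" -eq 0 ]; then
  ip netns add target_ns
  ip link add veth_host type veth peer name veth_tgt
  ip link set veth_tgt netns target_ns

  # Host side gets the initiator address; the namespace gets the target's:
  ip addr add 10.0.0.1/24 dev veth_host
  ip netns exec target_ns ip addr add 10.0.0.2/24 dev veth_tgt

  ip link set veth_host up
  ip netns exec target_ns ip link set veth_tgt up
  ip netns exec target_ns ip link set lo up

  # Open the NVMe/TCP listener port, then verify reachability both ways:
  iptables -I INPUT 1 -i veth_host -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec target_ns ping -c 1 10.0.0.1
else
  echo "skipping netns sketch: requires root"
fi
```

Running the target inside its own namespace is what lets the log exercise a real TCP path (`10.0.0.1` to `10.0.0.2`) on a single machine.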
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:23.975 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:24.233 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:24.233 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:24.233 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:24.233 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:24.233 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2464507 00:12:24.233 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:24.233 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2464507 00:12:24.234 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2464507 ']' 00:12:24.234 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.234 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:24.234 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:24.234 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:24.234 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:24.234 [2024-11-28 12:36:06.577393] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:12:24.234 [2024-11-28 12:36:06.577443] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.234 [2024-11-28 12:36:06.644866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:24.234 [2024-11-28 12:36:06.687748] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.234 [2024-11-28 12:36:06.687785] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.234 [2024-11-28 12:36:06.687792] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.234 [2024-11-28 12:36:06.687798] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.234 [2024-11-28 12:36:06.687804] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
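Above, `nvmfappstart` launches `nvmf_tgt` inside the namespace, records `nvmfpid`, and blocks in `waitforlisten` until the app is up on `/var/tmp/spdk.sock`. A generic sketch of that start-then-poll pattern (assumptions: a background stub replaces the real target, and a plain file replaces its RPC UNIX socket; the retry budget is illustrative):

```shell
# Start-then-poll sketch of the waitforlisten pattern traced in the log.
sock=$(mktemp -u /tmp/fake_rpc.XXXXXX)

# Stub "target": becomes ready (creates its socket file) after a short delay.
( sleep 1; : > "$sock" ) &
pid=$!

# waitforlisten equivalent: poll for readiness with a bounded retry count.
ready=no
for ((i = 0; i < 100; i++)); do
  if [ -e "$sock" ]; then ready=yes; break; fi
  sleep 0.1
done
echo "pid=$pid ready=$ready"

wait "$pid"
rm -f "$sock"
```

Polling with a retry cap (rather than a fixed sleep) is why the log's "Waiting for process to start up and listen on UNIX domain socket" message returns as soon as the target is actually ready.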
00:12:24.234 [2024-11-28 12:36:06.689388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.234 [2024-11-28 12:36:06.689485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.234 [2024-11-28 12:36:06.689507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.234 [2024-11-28 12:36:06.689509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.493 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.493 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:24.493 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:24.493 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:24.493 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:24.493 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.493 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:24.493 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15920 00:12:24.493 [2024-11-28 12:36:07.005090] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:24.751 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:24.751 { 00:12:24.752 "nqn": "nqn.2016-06.io.spdk:cnode15920", 00:12:24.752 "tgt_name": "foobar", 00:12:24.752 "method": "nvmf_create_subsystem", 00:12:24.752 "req_id": 1 00:12:24.752 } 00:12:24.752 Got JSON-RPC error 
response 00:12:24.752 response: 00:12:24.752 { 00:12:24.752 "code": -32603, 00:12:24.752 "message": "Unable to find target foobar" 00:12:24.752 }' 00:12:24.752 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:24.752 { 00:12:24.752 "nqn": "nqn.2016-06.io.spdk:cnode15920", 00:12:24.752 "tgt_name": "foobar", 00:12:24.752 "method": "nvmf_create_subsystem", 00:12:24.752 "req_id": 1 00:12:24.752 } 00:12:24.752 Got JSON-RPC error response 00:12:24.752 response: 00:12:24.752 { 00:12:24.752 "code": -32603, 00:12:24.752 "message": "Unable to find target foobar" 00:12:24.752 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:24.752 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:24.752 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22220 00:12:24.752 [2024-11-28 12:36:07.217830] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22220: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:24.752 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:24.752 { 00:12:24.752 "nqn": "nqn.2016-06.io.spdk:cnode22220", 00:12:24.752 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:24.752 "method": "nvmf_create_subsystem", 00:12:24.752 "req_id": 1 00:12:24.752 } 00:12:24.752 Got JSON-RPC error response 00:12:24.752 response: 00:12:24.752 { 00:12:24.752 "code": -32602, 00:12:24.752 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:24.752 }' 00:12:24.752 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:24.752 { 00:12:24.752 "nqn": "nqn.2016-06.io.spdk:cnode22220", 00:12:24.752 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:24.752 "method": "nvmf_create_subsystem", 
00:12:24.752 "req_id": 1 00:12:24.752 } 00:12:24.752 Got JSON-RPC error response 00:12:24.752 response: 00:12:24.752 { 00:12:24.752 "code": -32602, 00:12:24.752 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:24.752 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:24.752 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:24.752 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16073 00:12:25.011 [2024-11-28 12:36:07.438577] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16073: invalid model number 'SPDK_Controller' 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:25.011 { 00:12:25.011 "nqn": "nqn.2016-06.io.spdk:cnode16073", 00:12:25.011 "model_number": "SPDK_Controller\u001f", 00:12:25.011 "method": "nvmf_create_subsystem", 00:12:25.011 "req_id": 1 00:12:25.011 } 00:12:25.011 Got JSON-RPC error response 00:12:25.011 response: 00:12:25.011 { 00:12:25.011 "code": -32602, 00:12:25.011 "message": "Invalid MN SPDK_Controller\u001f" 00:12:25.011 }' 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:25.011 { 00:12:25.011 "nqn": "nqn.2016-06.io.spdk:cnode16073", 00:12:25.011 "model_number": "SPDK_Controller\u001f", 00:12:25.011 "method": "nvmf_create_subsystem", 00:12:25.011 "req_id": 1 00:12:25.011 } 00:12:25.011 Got JSON-RPC error response 00:12:25.011 response: 00:12:25.011 { 00:12:25.011 "code": -32602, 00:12:25.011 "message": "Invalid MN SPDK_Controller\u001f" 00:12:25.011 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.011 
12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:25.011 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.012 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.012 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:25.012 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:25.012 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:25.012 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.012 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.012 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:25.012 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:25.012 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:25.012 12:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.012 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.012 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:25.012 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:25.012 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:25.012 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.012 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.012 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:25.012 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:25.012 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:25.012 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.012 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:25.272 12:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:25.272 12:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.272 12:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:25.272 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.273 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.273 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:25.273 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:25.273 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:25.273 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.273 12:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:12:25.273 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ v == \- ]]
00:12:25.273 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'v{@vhEjC*e$^F_dEy;Ibi'
00:12:25.273 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'v{@vhEjC*e$^F_dEy;Ibi' nqn.2016-06.io.spdk:cnode4257
[2024-11-28 12:36:07.787783] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4257: invalid serial number 'v{@vhEjC*e$^F_dEy;Ibi'
00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:12:25.533 {
00:12:25.533 "nqn": "nqn.2016-06.io.spdk:cnode4257",
00:12:25.533 "serial_number": "v{@vhEjC*e$^F_dEy;Ibi",
00:12:25.533 "method": "nvmf_create_subsystem",
00:12:25.533 "req_id": 1
00:12:25.533 }
00:12:25.533 Got JSON-RPC error response
00:12:25.533 response:
00:12:25.533 {
00:12:25.533 "code": -32602,
00:12:25.533 "message": "Invalid SN v{@vhEjC*e$^F_dEy;Ibi"
00:12:25.533 }'
00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request:
00:12:25.533 {
00:12:25.533 "nqn": "nqn.2016-06.io.spdk:cnode4257",
00:12:25.533 "serial_number": "v{@vhEjC*e$^F_dEy;Ibi",
00:12:25.533 "method": "nvmf_create_subsystem",
00:12:25.533 "req_id": 1
00:12:25.533 }
00:12:25.533 Got JSON-RPC error response
00:12:25.533 response:
00:12:25.533 {
00:12:25.533 "code": -32602,
00:12:25.533 "message": "Invalid SN v{@vhEjC*e$^F_dEy;Ibi"
00:12:25.533 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:12:25.533 12:36:07
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.533 12:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:25.533 12:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:25.533 12:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:25.533 12:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:25.533 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.534 12:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.534 12:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:25.534 12:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.534 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:25.534 12:36:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:25.534 12:36:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.534 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.794 12:36:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.794 12:36:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ k == \- ]] 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'kMn%;oCJHJ\)]Sh`R:RUEBD)b}3F3Id,s<@W]nkG' 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'kMn%;oCJHJ\)]Sh`R:RUEBD)b}3F3Id,s<@W]nkG' nqn.2016-06.io.spdk:cnode8767 00:12:25.794 [2024-11-28 12:36:08.269380] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8767: invalid model number 'kMn%;oCJHJ\)]Sh`R:RUEBD)b}3F3Id,s<@W]nkG' 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:25.794 { 00:12:25.794 "nqn": "nqn.2016-06.io.spdk:cnode8767", 00:12:25.794 "model_number": "kMn%;oC\u007fJHJ\\)]Sh`R:RUEBD)b}3F3Id,s<@W]nkG", 00:12:25.794 "method": "nvmf_create_subsystem", 00:12:25.794 "req_id": 1 00:12:25.794 } 00:12:25.794 Got JSON-RPC error response 00:12:25.794 response: 00:12:25.794 { 00:12:25.794 "code": -32602, 00:12:25.794 "message": "Invalid MN kMn%;oC\u007fJHJ\\)]Sh`R:RUEBD)b}3F3Id,s<@W]nkG" 00:12:25.794 }' 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:25.794 { 00:12:25.794 "nqn": 
"nqn.2016-06.io.spdk:cnode8767", 00:12:25.794 "model_number": "kMn%;oC\u007fJHJ\\)]Sh`R:RUEBD)b}3F3Id,s<@W]nkG", 00:12:25.794 "method": "nvmf_create_subsystem", 00:12:25.794 "req_id": 1 00:12:25.794 } 00:12:25.794 Got JSON-RPC error response 00:12:25.794 response: 00:12:25.794 { 00:12:25.794 "code": -32602, 00:12:25.794 "message": "Invalid MN kMn%;oC\u007fJHJ\\)]Sh`R:RUEBD)b}3F3Id,s<@W]nkG" 00:12:25.794 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:25.794 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:26.054 [2024-11-28 12:36:08.478167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.054 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:26.312 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:26.312 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:26.312 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:26.312 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:26.312 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:26.571 [2024-11-28 12:36:08.915611] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:26.571 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:26.571 { 00:12:26.571 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:26.571 "listen_address": { 00:12:26.571 "trtype": "tcp", 00:12:26.571 "traddr": "", 00:12:26.571 
"trsvcid": "4421" 00:12:26.571 }, 00:12:26.571 "method": "nvmf_subsystem_remove_listener", 00:12:26.571 "req_id": 1 00:12:26.571 } 00:12:26.571 Got JSON-RPC error response 00:12:26.571 response: 00:12:26.571 { 00:12:26.571 "code": -32602, 00:12:26.571 "message": "Invalid parameters" 00:12:26.571 }' 00:12:26.571 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:26.571 { 00:12:26.571 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:26.571 "listen_address": { 00:12:26.571 "trtype": "tcp", 00:12:26.571 "traddr": "", 00:12:26.571 "trsvcid": "4421" 00:12:26.571 }, 00:12:26.571 "method": "nvmf_subsystem_remove_listener", 00:12:26.571 "req_id": 1 00:12:26.571 } 00:12:26.571 Got JSON-RPC error response 00:12:26.571 response: 00:12:26.571 { 00:12:26.571 "code": -32602, 00:12:26.571 "message": "Invalid parameters" 00:12:26.571 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:26.571 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14176 -i 0 00:12:26.830 [2024-11-28 12:36:09.132303] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14176: invalid cntlid range [0-65519] 00:12:26.830 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:26.830 { 00:12:26.830 "nqn": "nqn.2016-06.io.spdk:cnode14176", 00:12:26.830 "min_cntlid": 0, 00:12:26.830 "method": "nvmf_create_subsystem", 00:12:26.830 "req_id": 1 00:12:26.830 } 00:12:26.830 Got JSON-RPC error response 00:12:26.830 response: 00:12:26.830 { 00:12:26.830 "code": -32602, 00:12:26.830 "message": "Invalid cntlid range [0-65519]" 00:12:26.830 }' 00:12:26.830 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:26.830 { 00:12:26.830 "nqn": "nqn.2016-06.io.spdk:cnode14176", 00:12:26.830 "min_cntlid": 0, 00:12:26.830 
"method": "nvmf_create_subsystem", 00:12:26.830 "req_id": 1 00:12:26.830 } 00:12:26.830 Got JSON-RPC error response 00:12:26.830 response: 00:12:26.830 { 00:12:26.830 "code": -32602, 00:12:26.830 "message": "Invalid cntlid range [0-65519]" 00:12:26.830 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:26.830 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22944 -i 65520 00:12:26.830 [2024-11-28 12:36:09.341013] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22944: invalid cntlid range [65520-65519] 00:12:27.089 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:27.089 { 00:12:27.089 "nqn": "nqn.2016-06.io.spdk:cnode22944", 00:12:27.089 "min_cntlid": 65520, 00:12:27.089 "method": "nvmf_create_subsystem", 00:12:27.089 "req_id": 1 00:12:27.089 } 00:12:27.089 Got JSON-RPC error response 00:12:27.089 response: 00:12:27.089 { 00:12:27.089 "code": -32602, 00:12:27.089 "message": "Invalid cntlid range [65520-65519]" 00:12:27.089 }' 00:12:27.089 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:27.089 { 00:12:27.089 "nqn": "nqn.2016-06.io.spdk:cnode22944", 00:12:27.089 "min_cntlid": 65520, 00:12:27.089 "method": "nvmf_create_subsystem", 00:12:27.089 "req_id": 1 00:12:27.089 } 00:12:27.089 Got JSON-RPC error response 00:12:27.089 response: 00:12:27.089 { 00:12:27.089 "code": -32602, 00:12:27.089 "message": "Invalid cntlid range [65520-65519]" 00:12:27.089 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:27.089 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18707 -I 0 00:12:27.089 [2024-11-28 12:36:09.553776] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: 
*ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18707: invalid cntlid range [1-0] 00:12:27.089 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:27.089 { 00:12:27.089 "nqn": "nqn.2016-06.io.spdk:cnode18707", 00:12:27.089 "max_cntlid": 0, 00:12:27.089 "method": "nvmf_create_subsystem", 00:12:27.089 "req_id": 1 00:12:27.089 } 00:12:27.089 Got JSON-RPC error response 00:12:27.089 response: 00:12:27.089 { 00:12:27.089 "code": -32602, 00:12:27.089 "message": "Invalid cntlid range [1-0]" 00:12:27.089 }' 00:12:27.089 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:27.089 { 00:12:27.089 "nqn": "nqn.2016-06.io.spdk:cnode18707", 00:12:27.089 "max_cntlid": 0, 00:12:27.089 "method": "nvmf_create_subsystem", 00:12:27.089 "req_id": 1 00:12:27.089 } 00:12:27.089 Got JSON-RPC error response 00:12:27.089 response: 00:12:27.089 { 00:12:27.089 "code": -32602, 00:12:27.089 "message": "Invalid cntlid range [1-0]" 00:12:27.089 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:27.089 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9489 -I 65520 00:12:27.348 [2024-11-28 12:36:09.750420] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9489: invalid cntlid range [1-65520] 00:12:27.348 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:27.348 { 00:12:27.348 "nqn": "nqn.2016-06.io.spdk:cnode9489", 00:12:27.348 "max_cntlid": 65520, 00:12:27.348 "method": "nvmf_create_subsystem", 00:12:27.348 "req_id": 1 00:12:27.348 } 00:12:27.348 Got JSON-RPC error response 00:12:27.348 response: 00:12:27.348 { 00:12:27.348 "code": -32602, 00:12:27.348 "message": "Invalid cntlid range [1-65520]" 00:12:27.348 }' 00:12:27.348 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:12:27.348 { 00:12:27.348 "nqn": "nqn.2016-06.io.spdk:cnode9489", 00:12:27.348 "max_cntlid": 65520, 00:12:27.348 "method": "nvmf_create_subsystem", 00:12:27.348 "req_id": 1 00:12:27.348 } 00:12:27.348 Got JSON-RPC error response 00:12:27.348 response: 00:12:27.348 { 00:12:27.348 "code": -32602, 00:12:27.348 "message": "Invalid cntlid range [1-65520]" 00:12:27.348 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:27.348 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4213 -i 6 -I 5 00:12:27.607 [2024-11-28 12:36:09.959156] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4213: invalid cntlid range [6-5] 00:12:27.607 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:27.607 { 00:12:27.607 "nqn": "nqn.2016-06.io.spdk:cnode4213", 00:12:27.607 "min_cntlid": 6, 00:12:27.607 "max_cntlid": 5, 00:12:27.607 "method": "nvmf_create_subsystem", 00:12:27.607 "req_id": 1 00:12:27.607 } 00:12:27.607 Got JSON-RPC error response 00:12:27.607 response: 00:12:27.607 { 00:12:27.607 "code": -32602, 00:12:27.607 "message": "Invalid cntlid range [6-5]" 00:12:27.607 }' 00:12:27.607 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:27.607 { 00:12:27.607 "nqn": "nqn.2016-06.io.spdk:cnode4213", 00:12:27.607 "min_cntlid": 6, 00:12:27.607 "max_cntlid": 5, 00:12:27.607 "method": "nvmf_create_subsystem", 00:12:27.607 "req_id": 1 00:12:27.607 } 00:12:27.607 Got JSON-RPC error response 00:12:27.607 response: 00:12:27.607 { 00:12:27.607 "code": -32602, 00:12:27.607 "message": "Invalid cntlid range [6-5]" 00:12:27.607 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:27.607 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:27.607 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:27.607 { 00:12:27.607 "name": "foobar", 00:12:27.607 "method": "nvmf_delete_target", 00:12:27.607 "req_id": 1 00:12:27.607 } 00:12:27.607 Got JSON-RPC error response 00:12:27.607 response: 00:12:27.607 { 00:12:27.607 "code": -32602, 00:12:27.607 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:27.607 }' 00:12:27.607 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:27.607 { 00:12:27.607 "name": "foobar", 00:12:27.607 "method": "nvmf_delete_target", 00:12:27.607 "req_id": 1 00:12:27.607 } 00:12:27.607 Got JSON-RPC error response 00:12:27.607 response: 00:12:27.607 { 00:12:27.607 "code": -32602, 00:12:27.607 "message": "The specified target doesn't exist, cannot delete it." 00:12:27.607 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:27.608 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:27.608 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:27.608 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:27.608 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:27.608 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:27.608 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:27.608 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:27.608 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:27.608 rmmod nvme_tcp 00:12:27.866 
rmmod nvme_fabrics 00:12:27.866 rmmod nvme_keyring 00:12:27.866 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:27.867 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:27.867 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:27.867 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2464507 ']' 00:12:27.867 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2464507 00:12:27.867 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2464507 ']' 00:12:27.867 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2464507 00:12:27.867 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:27.867 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.867 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2464507 00:12:27.867 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.867 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.867 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2464507' 00:12:27.867 killing process with pid 2464507 00:12:27.867 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2464507 00:12:27.867 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2464507 00:12:28.124 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:28.124 12:36:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:28.124 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:28.124 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:28.124 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:28.124 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:28.124 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:12:28.124 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:28.124 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:28.124 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.124 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.124 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.030 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:30.030 00:12:30.030 real 0m11.461s 00:12:30.030 user 0m18.869s 00:12:30.030 sys 0m4.988s 00:12:30.030 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.030 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:30.030 ************************************ 00:12:30.030 END TEST nvmf_invalid 00:12:30.030 ************************************ 00:12:30.030 12:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh 
--transport=tcp 00:12:30.030 12:36:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:30.030 12:36:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.030 12:36:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:30.030 ************************************ 00:12:30.030 START TEST nvmf_connect_stress 00:12:30.030 ************************************ 00:12:30.030 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:30.291 * Looking for test storage... 00:12:30.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 
00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:30.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.291 --rc genhtml_branch_coverage=1 00:12:30.291 --rc genhtml_function_coverage=1 00:12:30.291 --rc genhtml_legend=1 00:12:30.291 --rc 
geninfo_all_blocks=1 00:12:30.291 --rc geninfo_unexecuted_blocks=1 00:12:30.291 00:12:30.291 ' 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:30.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.291 --rc genhtml_branch_coverage=1 00:12:30.291 --rc genhtml_function_coverage=1 00:12:30.291 --rc genhtml_legend=1 00:12:30.291 --rc geninfo_all_blocks=1 00:12:30.291 --rc geninfo_unexecuted_blocks=1 00:12:30.291 00:12:30.291 ' 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:30.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.291 --rc genhtml_branch_coverage=1 00:12:30.291 --rc genhtml_function_coverage=1 00:12:30.291 --rc genhtml_legend=1 00:12:30.291 --rc geninfo_all_blocks=1 00:12:30.291 --rc geninfo_unexecuted_blocks=1 00:12:30.291 00:12:30.291 ' 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:30.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.291 --rc genhtml_branch_coverage=1 00:12:30.291 --rc genhtml_function_coverage=1 00:12:30.291 --rc genhtml_legend=1 00:12:30.291 --rc geninfo_all_blocks=1 00:12:30.291 --rc geninfo_unexecuted_blocks=1 00:12:30.291 00:12:30.291 ' 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:30.291 
12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.291 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:30.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 
-- # gather_supported_nvmf_pci_devs 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:30.292 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:35.778 12:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:35.778 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 
]] 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:35.779 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:35.779 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.779 12:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:35.779 Found net devices under 0000:86:00.0: cvl_0_0 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:35.779 Found net devices under 0000:86:00.1: cvl_0_1 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:35.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:35.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:12:35.779 00:12:35.779 --- 10.0.0.2 ping statistics --- 00:12:35.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.779 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:35.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:35.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:12:35.779 00:12:35.779 --- 10.0.0.1 ping statistics --- 00:12:35.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.779 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:35.779 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:36.039 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:36.039 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:36.039 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:36.039 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.039 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2469074 00:12:36.039 12:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2469074 00:12:36.039 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:36.039 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2469074 ']' 00:12:36.039 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.039 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:36.039 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.039 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:36.039 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.039 [2024-11-28 12:36:18.383791] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:12:36.039 [2024-11-28 12:36:18.383842] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.039 [2024-11-28 12:36:18.449453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:36.039 [2024-11-28 12:36:18.491195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
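The `nvmf_tcp_init` namespace plumbing and ping checks traced above reduce to a short sequence. A hedged sketch follows; device names `cvl_0_0`/`cvl_0_1`, the 10.0.0.0/24 addresses, and port 4420 are taken from the log, but the grouping into one function is mine. It needs root and real NICs, so it is only defined, never invoked here:

```shell
#!/usr/bin/env bash
# Sketch of the TCP test-network bring-up logged by nvmf_tcp_init:
# the target NIC moves into its own netns; the initiator NIC stays put.
setup_nvmf_tcp_net() {
  local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

  ip -4 addr flush "$target_if"
  ip -4 addr flush "$initiator_if"

  ip netns add "$ns"
  ip link set "$target_if" netns "$ns"          # isolate target side in $ns

  ip addr add 10.0.0.1/24 dev "$initiator_if"   # NVMF_FIRST_INITIATOR_IP
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"  # target IP

  ip link set "$initiator_if" up
  ip netns exec "$ns" ip link set "$target_if" up
  ip netns exec "$ns" ip link set lo up

  # Accept NVMe/TCP traffic (port 4420) arriving on the initiator side.
  iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

  # Reachability checks in both directions, as in the logged pings.
  ping -c 1 10.0.0.2
  ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

With this in place, the target is launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...`, as the `nvmfappstart` trace below shows), while the initiator connects from the root namespace.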
00:12:36.039 [2024-11-28 12:36:18.491233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.039 [2024-11-28 12:36:18.491241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.039 [2024-11-28 12:36:18.491246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.039 [2024-11-28 12:36:18.491252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:36.039 [2024-11-28 12:36:18.492695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.039 [2024-11-28 12:36:18.492783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.039 [2024-11-28 12:36:18.492784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.298 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:36.298 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:36.298 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:36.298 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:36.298 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.298 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.298 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:36.298 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.298 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:12:36.298 [2024-11-28 12:36:18.629960] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.298 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.298 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:36.298 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.298 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.298 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.298 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.298 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.298 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.298 [2024-11-28 12:36:18.650171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.299 NULL1 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2469243 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2469243 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.299 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.866 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.866 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2469243 00:12:36.866 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:36.866 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.866 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.124 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.124 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2469243 00:12:37.124 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:37.124 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.124 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.382 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.382 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2469243 00:12:37.382 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:37.382 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.382 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.640 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.640 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2469243 00:12:37.640 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:37.640 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.640 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.477 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 --
# [[ 0 == 0 ]] 00:12:46.477 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2469243 00:12:46.477 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.477 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.477 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.477 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:46.736 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.736 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2469243 00:12:46.736 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2469243) - No such process 00:12:46.736 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2469243 00:12:46.736 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:46.736 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:46.736 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:46.736 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:46.736 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:46.736 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:46.736 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:46.736 12:36:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:46.736 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:46.736 rmmod nvme_tcp 00:12:46.736 rmmod nvme_fabrics 00:12:46.736 rmmod nvme_keyring 00:12:46.736 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:46.736 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:46.736 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:46.736 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2469074 ']' 00:12:46.736 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2469074 00:12:46.736 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2469074 ']' 00:12:46.736 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2469074 00:12:46.736 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:12:46.736 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.736 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2469074 00:12:46.994 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:46.994 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:46.994 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2469074' 00:12:46.995 killing process with pid 2469074 00:12:46.995 12:36:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2469074 00:12:46.995 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2469074 00:12:46.995 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:46.995 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:46.995 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:46.995 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:12:46.995 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:46.995 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:46.995 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:46.995 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:46.995 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:46.995 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.995 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.995 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:49.531 00:12:49.531 real 0m18.960s 00:12:49.531 user 0m40.101s 00:12:49.531 sys 0m8.457s 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:49.531 ************************************ 00:12:49.531 END TEST nvmf_connect_stress 00:12:49.531 ************************************ 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:49.531 ************************************ 00:12:49.531 START TEST nvmf_fused_ordering 00:12:49.531 ************************************ 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:49.531 * Looking for test storage... 
00:12:49.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:49.531 12:36:31 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:49.531 12:36:31 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:49.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.531 --rc genhtml_branch_coverage=1 00:12:49.531 --rc genhtml_function_coverage=1 00:12:49.531 --rc genhtml_legend=1 00:12:49.531 --rc geninfo_all_blocks=1 00:12:49.531 --rc geninfo_unexecuted_blocks=1 00:12:49.531 00:12:49.531 ' 00:12:49.531 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:49.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.532 --rc genhtml_branch_coverage=1 00:12:49.532 --rc genhtml_function_coverage=1 00:12:49.532 --rc genhtml_legend=1 00:12:49.532 --rc geninfo_all_blocks=1 00:12:49.532 --rc geninfo_unexecuted_blocks=1 00:12:49.532 00:12:49.532 ' 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:49.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.532 --rc genhtml_branch_coverage=1 00:12:49.532 --rc genhtml_function_coverage=1 00:12:49.532 --rc genhtml_legend=1 00:12:49.532 --rc geninfo_all_blocks=1 00:12:49.532 --rc geninfo_unexecuted_blocks=1 00:12:49.532 00:12:49.532 ' 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:49.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.532 --rc genhtml_branch_coverage=1 00:12:49.532 --rc genhtml_function_coverage=1 00:12:49.532 --rc genhtml_legend=1 00:12:49.532 --rc geninfo_all_blocks=1 00:12:49.532 --rc geninfo_unexecuted_blocks=1 00:12:49.532 00:12:49.532 ' 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:49.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:49.532 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:54.803 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.803 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:54.803 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:54.803 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:54.803 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:54.803 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:54.803 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.804 12:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:54.804 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.804 12:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:54.804 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.804 12:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:54.804 Found net devices under 0000:86:00.0: cvl_0_0 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:54.804 Found net devices under 0000:86:00.1: cvl_0_1 
00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:54.804 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:54.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:54.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:12:54.805 00:12:54.805 --- 10.0.0.2 ping statistics --- 00:12:54.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.805 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:12:54.805 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:54.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:12:54.805 00:12:54.805 --- 10.0.0.1 ping statistics --- 00:12:54.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.805 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:12:54.805 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.805 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:54.805 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:54.805 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.805 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:54.805 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:54.805 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.805 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:54.805 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:54.805 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:54.805 12:36:37 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:54.805 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:54.805 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:54.805 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2474461 00:12:54.805 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2474461 00:12:54.805 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:54.805 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2474461 ']' 00:12:54.805 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.805 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.805 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.805 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.805 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:54.805 [2024-11-28 12:36:37.074916] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:12:54.805 [2024-11-28 12:36:37.074964] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.805 [2024-11-28 12:36:37.142641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.805 [2024-11-28 12:36:37.183263] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.805 [2024-11-28 12:36:37.183300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.805 [2024-11-28 12:36:37.183308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.805 [2024-11-28 12:36:37.183314] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.805 [2024-11-28 12:36:37.183319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:54.805 [2024-11-28 12:36:37.183830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.805 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.805 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:54.805 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:54.805 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:54.805 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:54.805 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.805 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:54.805 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.805 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:54.805 [2024-11-28 12:36:37.319375] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.063 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.063 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:55.063 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.063 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:55.063 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.063 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.063 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.063 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:55.063 [2024-11-28 12:36:37.335557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.063 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.063 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:55.063 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.063 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:55.063 NULL1 00:12:55.063 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.063 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:55.063 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.063 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:55.063 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.063 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:55.063 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.063 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:55.063 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.063 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:55.063 [2024-11-28 12:36:37.378185] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:12:55.063 [2024-11-28 12:36:37.378215] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2474490 ] 00:12:55.321 Attached to nqn.2016-06.io.spdk:cnode1 00:12:55.321 Namespace ID: 1 size: 1GB 00:12:55.321 fused_ordering(0) 00:12:55.321 fused_ordering(1) 00:12:55.321 fused_ordering(2) 00:12:55.321 fused_ordering(3) 00:12:55.321 fused_ordering(4) 00:12:55.321 fused_ordering(5) 00:12:55.321 fused_ordering(6) 00:12:55.321 fused_ordering(7) 00:12:55.321 fused_ordering(8) 00:12:55.321 fused_ordering(9) 00:12:55.321 fused_ordering(10) 00:12:55.321 fused_ordering(11) 00:12:55.321 fused_ordering(12) 00:12:55.321 fused_ordering(13) 00:12:55.321 fused_ordering(14) 00:12:55.321 fused_ordering(15) 00:12:55.321 fused_ordering(16) 00:12:55.321 fused_ordering(17) 00:12:55.321 fused_ordering(18) 00:12:55.321 fused_ordering(19) 00:12:55.321 fused_ordering(20) 00:12:55.321 fused_ordering(21) 00:12:55.321 fused_ordering(22) 00:12:55.321 fused_ordering(23) 00:12:55.321 fused_ordering(24) 00:12:55.321 fused_ordering(25) 00:12:55.321 fused_ordering(26) 00:12:55.321 fused_ordering(27) 00:12:55.321 
fused_ordering(28) 00:12:55.321 fused_ordering(29) 00:12:55.321 fused_ordering(30) 00:12:55.321 fused_ordering(31) 00:12:55.321 fused_ordering(32) 00:12:55.321 fused_ordering(33) 00:12:55.321 fused_ordering(34) 00:12:55.321 fused_ordering(35) 00:12:55.321 fused_ordering(36) 00:12:55.321 fused_ordering(37) 00:12:55.321 fused_ordering(38) 00:12:55.321 fused_ordering(39) 00:12:55.321 fused_ordering(40) 00:12:55.321 fused_ordering(41) 00:12:55.321 fused_ordering(42) 00:12:55.321 fused_ordering(43) 00:12:55.321 fused_ordering(44) 00:12:55.321 fused_ordering(45) 00:12:55.321 fused_ordering(46) 00:12:55.321 fused_ordering(47) 00:12:55.321 fused_ordering(48) 00:12:55.321 fused_ordering(49) 00:12:55.321 fused_ordering(50) 00:12:55.321 fused_ordering(51) 00:12:55.321 fused_ordering(52) 00:12:55.321 fused_ordering(53) 00:12:55.321 fused_ordering(54) 00:12:55.321 fused_ordering(55) 00:12:55.321 fused_ordering(56) 00:12:55.321 fused_ordering(57) 00:12:55.321 fused_ordering(58) 00:12:55.321 fused_ordering(59) 00:12:55.321 fused_ordering(60) 00:12:55.321 fused_ordering(61) 00:12:55.321 fused_ordering(62) 00:12:55.321 fused_ordering(63) 00:12:55.321 fused_ordering(64) 00:12:55.321 fused_ordering(65) 00:12:55.321 fused_ordering(66) 00:12:55.321 fused_ordering(67) 00:12:55.321 fused_ordering(68) 00:12:55.321 fused_ordering(69) 00:12:55.321 fused_ordering(70) 00:12:55.321 fused_ordering(71) 00:12:55.321 fused_ordering(72) 00:12:55.321 fused_ordering(73) 00:12:55.321 fused_ordering(74) 00:12:55.321 fused_ordering(75) 00:12:55.321 fused_ordering(76) 00:12:55.321 fused_ordering(77) 00:12:55.321 fused_ordering(78) 00:12:55.321 fused_ordering(79) 00:12:55.321 fused_ordering(80) 00:12:55.321 fused_ordering(81) 00:12:55.321 fused_ordering(82) 00:12:55.321 fused_ordering(83) 00:12:55.321 fused_ordering(84) 00:12:55.321 fused_ordering(85) 00:12:55.321 fused_ordering(86) 00:12:55.321 fused_ordering(87) 00:12:55.321 fused_ordering(88) 00:12:55.321 fused_ordering(89) 00:12:55.321 
fused_ordering(90) 00:12:55.321 fused_ordering(91) 00:12:55.321 fused_ordering(92) 00:12:55.321 fused_ordering(93) 00:12:55.321 fused_ordering(94) 00:12:55.321 fused_ordering(95) 00:12:55.321 fused_ordering(96) 00:12:55.321 fused_ordering(97) 00:12:55.321 fused_ordering(98) 00:12:55.321 fused_ordering(99) 00:12:55.321 fused_ordering(100) 00:12:55.321 fused_ordering(101) 00:12:55.321 fused_ordering(102) 00:12:55.321 fused_ordering(103) 00:12:55.321 fused_ordering(104) 00:12:55.321 fused_ordering(105) 00:12:55.321 fused_ordering(106) 00:12:55.321 fused_ordering(107) 00:12:55.321 fused_ordering(108) 00:12:55.321 fused_ordering(109) 00:12:55.321 fused_ordering(110) 00:12:55.321 fused_ordering(111) 00:12:55.321 fused_ordering(112) 00:12:55.321 fused_ordering(113) 00:12:55.321 fused_ordering(114) 00:12:55.321 fused_ordering(115) 00:12:55.321 fused_ordering(116) 00:12:55.321 fused_ordering(117) 00:12:55.321 fused_ordering(118) 00:12:55.321 fused_ordering(119) 00:12:55.321 fused_ordering(120) 00:12:55.321 fused_ordering(121) 00:12:55.321 fused_ordering(122) 00:12:55.321 fused_ordering(123) 00:12:55.321 fused_ordering(124) 00:12:55.321 fused_ordering(125) 00:12:55.321 fused_ordering(126) 00:12:55.321 fused_ordering(127) 00:12:55.321 fused_ordering(128) 00:12:55.321 fused_ordering(129) 00:12:55.321 fused_ordering(130) 00:12:55.321 fused_ordering(131) 00:12:55.321 fused_ordering(132) 00:12:55.321 fused_ordering(133) 00:12:55.321 fused_ordering(134) 00:12:55.321 fused_ordering(135) 00:12:55.321 fused_ordering(136) 00:12:55.321 fused_ordering(137) 00:12:55.321 fused_ordering(138) 00:12:55.321 fused_ordering(139) 00:12:55.321 fused_ordering(140) 00:12:55.321 fused_ordering(141) 00:12:55.321 fused_ordering(142) 00:12:55.321 fused_ordering(143) 00:12:55.321 fused_ordering(144) 00:12:55.321 fused_ordering(145) 00:12:55.321 fused_ordering(146) 00:12:55.321 fused_ordering(147) 00:12:55.321 fused_ordering(148) 00:12:55.321 fused_ordering(149) 00:12:55.321 fused_ordering(150) 
00:12:55.321 fused_ordering(151) 00:12:55.321 fused_ordering(152) 00:12:55.321 fused_ordering(153) 00:12:55.321 fused_ordering(154) 00:12:55.321 fused_ordering(155) 00:12:55.321 fused_ordering(156) 00:12:55.321 fused_ordering(157) 00:12:55.321 fused_ordering(158) 00:12:55.321 fused_ordering(159) 00:12:55.321 fused_ordering(160) 00:12:55.321 fused_ordering(161) 00:12:55.321 fused_ordering(162) 00:12:55.321 fused_ordering(163) 00:12:55.321 fused_ordering(164) 00:12:55.321 fused_ordering(165) 00:12:55.321 fused_ordering(166) 00:12:55.321 fused_ordering(167) 00:12:55.321 fused_ordering(168) 00:12:55.321 fused_ordering(169) 00:12:55.321 fused_ordering(170) 00:12:55.321 fused_ordering(171) 00:12:55.321 fused_ordering(172) 00:12:55.321 fused_ordering(173) 00:12:55.321 fused_ordering(174) 00:12:55.321 fused_ordering(175) 00:12:55.321 fused_ordering(176) 00:12:55.321 fused_ordering(177) 00:12:55.321 fused_ordering(178) 00:12:55.321 fused_ordering(179) 00:12:55.321 fused_ordering(180) 00:12:55.321 fused_ordering(181) 00:12:55.321 fused_ordering(182) 00:12:55.321 fused_ordering(183) 00:12:55.321 fused_ordering(184) 00:12:55.321 fused_ordering(185) 00:12:55.321 fused_ordering(186) 00:12:55.321 fused_ordering(187) 00:12:55.321 fused_ordering(188) 00:12:55.321 fused_ordering(189) 00:12:55.321 fused_ordering(190) 00:12:55.322 fused_ordering(191) 00:12:55.322 fused_ordering(192) 00:12:55.322 fused_ordering(193) 00:12:55.322 fused_ordering(194) 00:12:55.322 fused_ordering(195) 00:12:55.322 fused_ordering(196) 00:12:55.322 fused_ordering(197) 00:12:55.322 fused_ordering(198) 00:12:55.322 fused_ordering(199) 00:12:55.322 fused_ordering(200) 00:12:55.322 fused_ordering(201) 00:12:55.322 fused_ordering(202) 00:12:55.322 fused_ordering(203) 00:12:55.322 fused_ordering(204) 00:12:55.322 fused_ordering(205) 00:12:55.580 fused_ordering(206) 00:12:55.580 fused_ordering(207) 00:12:55.580 fused_ordering(208) 00:12:55.580 fused_ordering(209) 00:12:55.580 fused_ordering(210) 00:12:55.580 
00:12:55.580 fused_ordering(211) … fused_ordering(1023) 00:12:56.975 [repetitive per-iteration fused_ordering output elided; iterations 211 through 1023 all completed between 00:12:55.580 and 00:12:56.975] 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:56.975 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:56.975 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:56.975 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:56.975 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:56.975 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:56.975 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:56.975 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:56.975 rmmod nvme_tcp 00:12:56.975 rmmod nvme_fabrics 00:12:56.975 rmmod nvme_keyring 00:12:56.975 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:12:56.975 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:56.975 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:56.975 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2474461 ']' 00:12:56.975 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2474461 00:12:56.976 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2474461 ']' 00:12:56.976 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2474461 00:12:56.976 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:12:56.976 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.976 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2474461 00:12:56.976 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:56.976 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:56.976 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2474461' 00:12:56.976 killing process with pid 2474461 00:12:56.976 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2474461 00:12:56.976 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2474461 00:12:57.235 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:57.235 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:12:57.235 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:57.235 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:57.235 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:57.235 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:57.235 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:57.235 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:57.235 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:57.235 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.235 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.235 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.140 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:59.140 00:12:59.140 real 0m10.026s 00:12:59.140 user 0m4.879s 00:12:59.140 sys 0m5.347s 00:12:59.140 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.140 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:59.140 ************************************ 00:12:59.140 END TEST nvmf_fused_ordering 00:12:59.140 ************************************ 00:12:59.140 12:36:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:59.140 12:36:41 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:59.140 12:36:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.140 12:36:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:59.400 ************************************ 00:12:59.400 START TEST nvmf_ns_masking 00:12:59.400 ************************************ 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:59.400 * Looking for test storage... 00:12:59.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:59.400 12:36:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:59.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.400 --rc genhtml_branch_coverage=1 00:12:59.400 --rc genhtml_function_coverage=1 00:12:59.400 --rc genhtml_legend=1 00:12:59.400 --rc geninfo_all_blocks=1 00:12:59.400 --rc geninfo_unexecuted_blocks=1 00:12:59.400 00:12:59.400 ' 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:59.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.400 --rc genhtml_branch_coverage=1 00:12:59.400 --rc genhtml_function_coverage=1 00:12:59.400 --rc genhtml_legend=1 00:12:59.400 --rc geninfo_all_blocks=1 00:12:59.400 --rc geninfo_unexecuted_blocks=1 00:12:59.400 00:12:59.400 ' 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:59.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.400 --rc genhtml_branch_coverage=1 00:12:59.400 --rc genhtml_function_coverage=1 00:12:59.400 --rc genhtml_legend=1 00:12:59.400 --rc geninfo_all_blocks=1 00:12:59.400 --rc geninfo_unexecuted_blocks=1 00:12:59.400 00:12:59.400 ' 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:59.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.400 --rc genhtml_branch_coverage=1 00:12:59.400 --rc 
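The comparison being traced above (scripts/common.sh@338–368) splits two dotted version strings and compares them component by component, padding the shorter one with zeros. A minimal standalone sketch of that logic, reduced to a less-than test (the `version_lt` name is ours, not SPDK's, and this omits the other operators common.sh supports):

```shell
# version_lt A B: succeed (exit 0) iff dotted version A < B,
# comparing numeric components left to right, missing components as 0.
version_lt() {
  local v1=$1 v2=$2
  local -a a1 a2
  IFS=. read -ra a1 <<< "$v1"
  IFS=. read -ra a2 <<< "$v2"
  local len=${#a1[@]} v
  (( ${#a2[@]} > len )) && len=${#a2[@]}
  for ((v = 0; v < len; v++)); do
    local c1=${a1[v]:-0} c2=${a2[v]:-0}
    # 10# forces base-10 so components like "03" aren't read as octal
    (( 10#$c1 > 10#$c2 )) && return 1
    (( 10#$c1 < 10#$c2 )) && return 0
  done
  return 1   # equal is not less-than
}

version_lt 1.9 2.0 && echo "1.9 < 2.0"
```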
genhtml_function_coverage=1 00:12:59.400 --rc genhtml_legend=1 00:12:59.400 --rc geninfo_all_blocks=1 00:12:59.400 --rc geninfo_unexecuted_blocks=1 00:12:59.400 00:12:59.400 ' 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.400 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:59.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
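Note the real failure captured in the trace above: `nvmf/common.sh@33 -- '[' '' -eq 1 ']'` followed by `line 33: [: : integer expression expected`. An empty variable reached an integer test. A hedged sketch of the failure mode and the usual guard (the variable name here is illustrative, not necessarily the one common.sh uses):

```shell
# An empty string passed to `[ ... -eq ... ]` is not a valid integer and
# the test errors out, exactly as logged above. Substituting a default
# with ${var:-0} (covers both unset and empty) keeps the test well-formed.
SPDK_SOME_FLAG=""                          # empty, as in the run above
if [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]; then
  echo "flag enabled"
else
  echo "flag disabled"
fi
```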
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=281b8007-e365-4070-9a6c-002fb73cf69d 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=9a557f34-a025-4ac7-a5a0-d348b905f36d 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a7c71cd2-3207-407d-a2eb-44a4f6c787e7 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:59.401 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:04.674 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:04.674 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:04.674 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:04.675 12:36:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:04.675 12:36:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:04.675 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:04.675 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:13:04.675 Found net devices under 0000:86:00.0: cvl_0_0 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:04.675 Found net devices under 0000:86:00.1: cvl_0_1 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
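The discovery loop traced above (`gather_supported_nvmf_pci_devs`) resolves each matching PCI address to its kernel interface name by globbing `/sys/bus/pci/devices/$pci/net/*` and stripping the path. A self-contained sketch against a throwaway fake sysfs tree (the PCI addresses and interface names mirror this log; the temp-dir scaffolding is ours):

```shell
# Build a fake sysfs layout matching the two e810 ports found in the log,
# then recover the interface names the way nvmf/common.sh@411-429 does.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:86:00.0/net/cvl_0_0" "$sysfs/0000:86:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do
  pci_net_devs=("$sysfs/$pci/net/"*)        # glob the netdev dir(s)
  pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface name
  net_devs+=("${pci_net_devs[@]}")
done

echo "Found net devices: ${net_devs[*]}"
rm -rf "$sysfs"
```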
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:04.675 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:04.935 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:04.935 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:04.935 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:04.935 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:04.935 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:04.935 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:04.935 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:04.935 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:04.935 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:04.935 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:04.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:04.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:13:04.935 00:13:04.935 --- 10.0.0.2 ping statistics --- 00:13:04.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.935 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:13:04.935 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:04.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:04.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:13:04.935 00:13:04.935 --- 10.0.0.1 ping statistics --- 00:13:04.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.935 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:13:04.935 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:04.935 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:04.935 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:04.935 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:04.935 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:04.935 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:04.935 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:04.935 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:04.935 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:05.194 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:05.194 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:05.194 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:05.194 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:05.194 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:05.194 12:36:47 
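The `nvmf_tcp_init` sequence above splits target and initiator across a network namespace so traffic traverses a real link: port `cvl_0_0` is moved into `cvl_0_0_ns_spdk` and given 10.0.0.2, while its peer `cvl_0_1` stays in the root namespace as 10.0.0.1, and both directions are verified with ping. A dry-run sketch of that sequence (the `run()` echo wrapper is ours; the real commands need root plus the physical NICs from this run, so nothing here is executed for real):

```shell
# Echo each command instead of executing it; the order follows
# nvmf/common.sh@271-291 as captured in the log above.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                       # target side into ns
run ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator, root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                    # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                # target -> initiator
```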
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2478249 00:13:05.194 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2478249 00:13:05.194 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2478249 ']' 00:13:05.194 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.194 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:05.194 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.194 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:05.194 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:05.194 [2024-11-28 12:36:47.498153] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:13:05.194 [2024-11-28 12:36:47.498199] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.194 [2024-11-28 12:36:47.564481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.194 [2024-11-28 12:36:47.606422] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:05.194 [2024-11-28 12:36:47.606454] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:05.194 [2024-11-28 12:36:47.606462] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:05.194 [2024-11-28 12:36:47.606468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:05.194 [2024-11-28 12:36:47.606472] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:05.194 [2024-11-28 12:36:47.607029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.194 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:05.194 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:05.195 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:05.195 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:05.195 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:05.454 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:05.454 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:05.454 [2024-11-28 12:36:47.921184] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:05.454 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:05.454 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:05.454 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:13:05.713 Malloc1 00:13:05.713 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:05.972 Malloc2 00:13:05.972 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:06.231 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:06.231 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.489 [2024-11-28 12:36:48.880059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.489 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:06.490 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a7c71cd2-3207-407d-a2eb-44a4f6c787e7 -a 10.0.0.2 -s 4420 -i 4 00:13:06.748 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.748 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:06.748 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.748 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:06.748 12:36:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:08.694 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:08.694 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:08.694 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.694 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:08.694 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.694 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:08.694 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:08.694 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:08.694 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:08.694 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:08.694 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:08.694 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:08.694 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:08.694 [ 0]:0x1 00:13:08.694 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:08.694 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:08.694 
12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2b71c8d9db2d45b3a76b94f00e8692cf 00:13:08.694 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2b71c8d9db2d45b3a76b94f00e8692cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:08.694 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:08.953 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:08.953 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:08.953 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:08.953 [ 0]:0x1 00:13:08.953 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:08.953 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:08.953 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2b71c8d9db2d45b3a76b94f00e8692cf 00:13:08.953 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2b71c8d9db2d45b3a76b94f00e8692cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:08.953 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:08.953 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:08.953 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:08.953 [ 1]:0x2 00:13:08.953 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
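The repeated `ns_is_visible` checks above boil down to one comparison: `nvme id-ns` is run against the namespace and the reported NGUID is matched against 32 zeros. An all-zero NGUID means the namespace is masked from this host; anything else means it is visible. A self-contained sketch of that check, using the NGUID the log reports for nsid 0x1:

```shell
# ns_is_visible-style check (sketch): an all-zero NGUID from id-ns means
# the namespace is masked; a non-zero NGUID means it is visible.
nguid=2b71c8d9db2d45b3a76b94f00e8692cf   # value reported in the log for nsid 0x1
zeros=00000000000000000000000000000000
if [ "$nguid" != "$zeros" ]; then
    echo visible
else
    echo masked
fi
```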
00:13:08.953 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:08.953 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ebf1b6f2594c4e80899f6cd8fff864bd 00:13:08.954 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ebf1b6f2594c4e80899f6cd8fff864bd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:08.954 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:08.954 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:09.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.212 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.212 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:09.470 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:09.470 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a7c71cd2-3207-407d-a2eb-44a4f6c787e7 -a 10.0.0.2 -s 4420 -i 4 00:13:09.729 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:09.729 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:09.729 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:09.729 12:36:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:09.729 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:09.729 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:11.632 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:11.632 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:11.632 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:11.632 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:11.632 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:11.632 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:11.632 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:11.632 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:11.891 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:11.891 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:11.891 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:11.891 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:11.891 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:13:11.891 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:11.891 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:11.891 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:11.891 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:11.891 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:11.891 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:11.891 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:11.891 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:11.891 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:11.891 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:11.892 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:11.892 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:11.892 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:11.892 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:11.892 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:11.892 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:13:11.892 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:11.892 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:11.892 [ 0]:0x2 00:13:11.892 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:11.892 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:11.892 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ebf1b6f2594c4e80899f6cd8fff864bd 00:13:11.892 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ebf1b6f2594c4e80899f6cd8fff864bd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:11.892 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:12.151 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:12.151 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:12.151 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:12.151 [ 0]:0x1 00:13:12.151 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:12.151 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:12.151 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2b71c8d9db2d45b3a76b94f00e8692cf 00:13:12.151 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2b71c8d9db2d45b3a76b94f00e8692cf != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:12.151 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:12.151 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:12.151 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:12.151 [ 1]:0x2 00:13:12.151 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:12.151 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:12.151 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ebf1b6f2594c4e80899f6cd8fff864bd 00:13:12.151 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ebf1b6f2594c4e80899f6cd8fff864bd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:12.151 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:12.411 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:12.411 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:12.411 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:12.411 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:12.411 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:12.411 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:13:12.411 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:12.411 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:12.411 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:12.411 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:12.411 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:12.411 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:12.411 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:12.411 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:12.411 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:12.411 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:12.411 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:12.411 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:12.411 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:12.411 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:12.411 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:12.411 [ 0]:0x2 00:13:12.411 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:12.411 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:12.671 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ebf1b6f2594c4e80899f6cd8fff864bd 00:13:12.671 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ebf1b6f2594c4e80899f6cd8fff864bd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:12.671 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:12.671 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.671 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:12.671 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:12.671 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a7c71cd2-3207-407d-a2eb-44a4f6c787e7 -a 10.0.0.2 -s 4420 -i 4 00:13:12.929 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:12.929 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:12.929 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.929 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:12.929 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:12.929 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:14.832 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:14.832 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:14.832 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.832 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:14.832 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.832 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:14.832 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:14.832 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:15.091 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:15.091 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:15.091 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:15.091 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:15.091 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:15.091 [ 0]:0x1 00:13:15.091 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:15.091 12:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:15.091 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2b71c8d9db2d45b3a76b94f00e8692cf 00:13:15.091 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2b71c8d9db2d45b3a76b94f00e8692cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:15.091 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:15.091 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:15.091 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:15.091 [ 1]:0x2 00:13:15.091 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:15.091 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:15.350 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ebf1b6f2594c4e80899f6cd8fff864bd 00:13:15.350 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ebf1b6f2594c4e80899f6cd8fff864bd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:15.350 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:15.350 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:15.350 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:15.350 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:15.350 
12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:15.350 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:15.350 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:15.350 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:15.350 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:15.350 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:15.350 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:15.350 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:15.350 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:15.608 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:15.609 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:15.609 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:15.609 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:15.609 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:15.609 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:15.609 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:13:15.609 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:15.609 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:15.609 [ 0]:0x2 00:13:15.609 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:15.609 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:15.609 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ebf1b6f2594c4e80899f6cd8fff864bd 00:13:15.609 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ebf1b6f2594c4e80899f6cd8fff864bd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:15.609 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:15.609 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:15.609 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:15.609 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:15.609 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:15.609 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:15.609 12:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:15.609 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:15.609 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:15.609 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:15.609 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:15.609 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:15.609 [2024-11-28 12:36:58.098836] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:15.609 request: 00:13:15.609 { 00:13:15.609 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:15.609 "nsid": 2, 00:13:15.609 "host": "nqn.2016-06.io.spdk:host1", 00:13:15.609 "method": "nvmf_ns_remove_host", 00:13:15.609 "req_id": 1 00:13:15.609 } 00:13:15.609 Got JSON-RPC error response 00:13:15.609 response: 00:13:15.609 { 00:13:15.609 "code": -32602, 00:13:15.609 "message": "Invalid parameters" 00:13:15.609 } 00:13:15.609 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:15.609 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:15.609 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:15.609 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:15.609 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:15.609 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:15.609 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:15.609 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:15.609 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:15.609 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:15.609 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:15.609 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:15.609 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:15.609 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:15.868 12:36:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:15.868 [ 0]:0x2 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ebf1b6f2594c4e80899f6cd8fff864bd 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ebf1b6f2594c4e80899f6cd8fff864bd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:15.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2480241 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2480241 
/var/tmp/host.sock 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2480241 ']' 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:15.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.868 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:15.868 [2024-11-28 12:36:58.311349] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
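Both `waitforserial` earlier and `waitforlisten` here follow the same bounded-retry pattern: poll for the resource (a block device serial, or the `/var/tmp/host.sock` RPC socket) with a sleep between attempts, giving up after a retry cap. A minimal self-contained sketch of that pattern (the socket path and timings here are stand-ins for illustration, not the test's actual values):

```shell
# waitforlisten-style polling sketch: wait for a path to appear,
# bounded by a retry count. /tmp/demo.sock is a hypothetical stand-in
# for the spdk_tgt RPC socket.
sock=/tmp/demo.sock
rm -f "$sock"
( sleep 0.2; touch "$sock" ) &   # stand-in for the target creating its socket
i=0
until [ -e "$sock" ] || [ "$i" -ge 50 ]; do
    i=$((i+1))
    sleep 0.1
done
wait
[ -e "$sock" ] && echo listening || echo timeout
```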
00:13:15.868 [2024-11-28 12:36:58.311394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2480241 ] 00:13:15.868 [2024-11-28 12:36:58.373844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.126 [2024-11-28 12:36:58.415765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.126 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:16.126 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:16.126 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.384 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:16.642 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 281b8007-e365-4070-9a6c-002fb73cf69d 00:13:16.642 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:16.642 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 281B8007E36540709A6C002FB73CF69D -i 00:13:16.900 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 9a557f34-a025-4ac7-a5a0-d348b905f36d 00:13:16.900 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:16.900 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 9A557F34A0254AC7A5A0D348B905F36D -i 00:13:16.900 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:17.159 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:17.417 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:17.417 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:17.676 nvme0n1 00:13:17.676 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:17.676 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:17.934 nvme1n2 00:13:17.934 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:17.934 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:17.934 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:17.934 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:17.934 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:18.193 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:18.193 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:18.193 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:18.193 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:18.452 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 281b8007-e365-4070-9a6c-002fb73cf69d == \2\8\1\b\8\0\0\7\-\e\3\6\5\-\4\0\7\0\-\9\a\6\c\-\0\0\2\f\b\7\3\c\f\6\9\d ]] 00:13:18.452 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:18.452 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:18.452 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:18.710 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 9a557f34-a025-4ac7-a5a0-d348b905f36d == \9\a\5\5\7\f\3\4\-\a\0\2\5\-\4\a\c\7\-\a\5\a\0\-\d\3\4\8\b\9\0\5\f\3\6\d ]] 00:13:18.710 12:37:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.710 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:18.969 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 281b8007-e365-4070-9a6c-002fb73cf69d 00:13:18.969 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:18.970 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 281B8007E36540709A6C002FB73CF69D 00:13:18.970 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:18.970 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 281B8007E36540709A6C002FB73CF69D 00:13:18.970 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:18.970 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:18.970 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:18.970 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:18.970 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:18.970 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:18.970 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:18.970 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:18.970 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 281B8007E36540709A6C002FB73CF69D 00:13:19.228 [2024-11-28 12:37:01.564354] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:19.228 [2024-11-28 12:37:01.564389] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:19.229 [2024-11-28 12:37:01.564397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.229 request: 00:13:19.229 { 00:13:19.229 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:19.229 "namespace": { 00:13:19.229 "bdev_name": "invalid", 00:13:19.229 "nsid": 1, 00:13:19.229 "nguid": "281B8007E36540709A6C002FB73CF69D", 00:13:19.229 "no_auto_visible": false, 00:13:19.229 "hide_metadata": false 00:13:19.229 }, 00:13:19.229 "method": "nvmf_subsystem_add_ns", 00:13:19.229 "req_id": 1 00:13:19.229 } 00:13:19.229 Got JSON-RPC error response 00:13:19.229 response: 00:13:19.229 { 00:13:19.229 "code": -32602, 00:13:19.229 "message": "Invalid parameters" 00:13:19.229 } 00:13:19.229 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:19.229 12:37:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:19.229 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:19.229 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:19.229 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 281b8007-e365-4070-9a6c-002fb73cf69d 00:13:19.229 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:19.229 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 281B8007E36540709A6C002FB73CF69D -i 00:13:19.487 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:21.389 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:21.389 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:21.389 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:21.648 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:21.648 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2480241 00:13:21.648 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2480241 ']' 00:13:21.648 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2480241 00:13:21.648 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:21.648 12:37:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:21.648 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2480241 00:13:21.648 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:21.648 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:21.648 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2480241' 00:13:21.648 killing process with pid 2480241 00:13:21.648 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2480241 00:13:21.648 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2480241 00:13:21.907 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.166 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:22.166 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:22.166 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:22.166 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:22.166 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:22.166 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:22.166 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:22.166 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:13:22.166 rmmod nvme_tcp 00:13:22.166 rmmod nvme_fabrics 00:13:22.166 rmmod nvme_keyring 00:13:22.166 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:22.166 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:22.166 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:22.166 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2478249 ']' 00:13:22.166 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2478249 00:13:22.166 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2478249 ']' 00:13:22.166 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2478249 00:13:22.166 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:22.166 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.166 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2478249 00:13:22.166 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.166 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.166 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2478249' 00:13:22.166 killing process with pid 2478249 00:13:22.166 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2478249 00:13:22.166 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2478249 00:13:22.425 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:22.425 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:22.425 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:22.425 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:22.425 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:22.425 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:22.425 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:22.425 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:22.425 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:22.425 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.425 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.426 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.962 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:24.962 00:13:24.962 real 0m25.254s 00:13:24.962 user 0m30.192s 00:13:24.962 sys 0m6.763s 00:13:24.962 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.962 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:24.962 ************************************ 00:13:24.962 END TEST nvmf_ns_masking 00:13:24.962 ************************************ 00:13:24.962 12:37:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 
1 ]] 00:13:24.962 12:37:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:24.962 12:37:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:24.962 12:37:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.962 12:37:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:24.962 ************************************ 00:13:24.962 START TEST nvmf_nvme_cli 00:13:24.962 ************************************ 00:13:24.962 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:24.962 * Looking for test storage... 00:13:24.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra 
ver1 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:24.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.962 --rc genhtml_branch_coverage=1 00:13:24.962 --rc genhtml_function_coverage=1 00:13:24.962 --rc genhtml_legend=1 00:13:24.962 --rc geninfo_all_blocks=1 00:13:24.962 --rc geninfo_unexecuted_blocks=1 00:13:24.962 
00:13:24.962 ' 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:24.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.962 --rc genhtml_branch_coverage=1 00:13:24.962 --rc genhtml_function_coverage=1 00:13:24.962 --rc genhtml_legend=1 00:13:24.962 --rc geninfo_all_blocks=1 00:13:24.962 --rc geninfo_unexecuted_blocks=1 00:13:24.962 00:13:24.962 ' 00:13:24.962 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:24.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.963 --rc genhtml_branch_coverage=1 00:13:24.963 --rc genhtml_function_coverage=1 00:13:24.963 --rc genhtml_legend=1 00:13:24.963 --rc geninfo_all_blocks=1 00:13:24.963 --rc geninfo_unexecuted_blocks=1 00:13:24.963 00:13:24.963 ' 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:24.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.963 --rc genhtml_branch_coverage=1 00:13:24.963 --rc genhtml_function_coverage=1 00:13:24.963 --rc genhtml_legend=1 00:13:24.963 --rc geninfo_all_blocks=1 00:13:24.963 --rc geninfo_unexecuted_blocks=1 00:13:24.963 00:13:24.963 ' 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.963 12:37:07 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:24.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:24.963 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.235 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:30.235 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:30.235 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:30.235 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:30.235 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:30.235 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:30.235 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:30.235 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:30.235 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:30.235 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:30.235 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:30.235 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:30.235 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:30.235 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:30.236 12:37:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:30.236 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:30.236 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.236 12:37:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:30.236 Found net devices under 0000:86:00.0: cvl_0_0 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:30.236 Found net devices under 0000:86:00.1: cvl_0_1 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.236 12:37:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:30.236 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:30.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:30.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:13:30.495 00:13:30.495 --- 10.0.0.2 ping statistics --- 00:13:30.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.496 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:13:30.496 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:30.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:13:30.496 00:13:30.496 --- 10.0.0.1 ping statistics --- 00:13:30.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.496 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:13:30.496 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.496 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:30.496 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:30.496 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.496 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:30.496 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:30.496 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.496 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:30.496 12:37:12 
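The setup above verifies the target/initiator wiring with one `ping -c 1` in each direction, relying only on ping's exit status. When a script needs the loss figure itself rather than pass/fail, the statistics line can be scraped; a sketch against the output format shown in the log:

```shell
# Extract the packet-loss percentage from a ping statistics line.
packet_loss() {
    grep 'packet loss' | sed -E 's/.* ([0-9.]+)% packet loss.*/\1/'
}

printf '1 packets transmitted, 1 received, 0%% packet loss, time 0ms\n' | packet_loss
# → 0
```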
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:30.496 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:30.496 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:30.496 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:30.496 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.496 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2484737 00:13:30.496 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2484737 00:13:30.496 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:30.496 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2484737 ']' 00:13:30.496 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.496 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:30.496 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.496 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:30.496 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.496 [2024-11-28 12:37:12.867932] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:13:30.496 [2024-11-28 12:37:12.867995] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.496 [2024-11-28 12:37:12.936530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:30.496 [2024-11-28 12:37:12.980297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.496 [2024-11-28 12:37:12.980338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.496 [2024-11-28 12:37:12.980345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.496 [2024-11-28 12:37:12.980352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.496 [2024-11-28 12:37:12.980358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:30.496 [2024-11-28 12:37:12.981832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.496 [2024-11-28 12:37:12.981929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.496 [2024-11-28 12:37:12.981993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.496 [2024-11-28 12:37:12.981995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.755 [2024-11-28 12:37:13.133166] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.755 Malloc0 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.755 Malloc1 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.755 [2024-11-28 12:37:13.228789] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.755 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:31.014 00:13:31.014 Discovery Log Number of Records 2, Generation counter 2 00:13:31.014 =====Discovery Log Entry 0====== 00:13:31.014 trtype: tcp 00:13:31.014 adrfam: ipv4 00:13:31.014 subtype: current discovery subsystem 00:13:31.014 treq: not required 00:13:31.014 portid: 0 00:13:31.014 trsvcid: 4420 
00:13:31.014 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:31.014 traddr: 10.0.0.2 00:13:31.014 eflags: explicit discovery connections, duplicate discovery information 00:13:31.014 sectype: none 00:13:31.014 =====Discovery Log Entry 1====== 00:13:31.014 trtype: tcp 00:13:31.014 adrfam: ipv4 00:13:31.014 subtype: nvme subsystem 00:13:31.014 treq: not required 00:13:31.014 portid: 0 00:13:31.014 trsvcid: 4420 00:13:31.014 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:31.014 traddr: 10.0.0.2 00:13:31.014 eflags: none 00:13:31.014 sectype: none 00:13:31.014 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:31.014 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:31.014 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:31.014 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:31.014 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:31.014 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:31.014 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:31.014 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:31.014 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:31.014 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:31.014 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:32.391 12:37:14 
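The `nvme discover` dump above reports two records: the well-known discovery subsystem and nqn.2016-06.io.spdk:cnode1. Scripts that want just the subsystem NQNs rather than the full dump usually scrape the `subnqn:` fields; a small sketch against that record format:

```shell
# Pull subnqn values from `nvme discover`-style output.
list_subnqns() {
    awk '$1 == "subnqn:" {print $2}'
}

list_subnqns <<'EOF'
=====Discovery Log Entry 0======
subnqn:  nqn.2014-08.org.nvmexpress.discovery
=====Discovery Log Entry 1======
subnqn:  nqn.2016-06.io.spdk:cnode1
EOF
# → nqn.2014-08.org.nvmexpress.discovery
# → nqn.2016-06.io.spdk:cnode1
```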
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:32.391 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:32.391 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:32.391 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:32.391 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:32.391 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:34.296 
12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:34.296 /dev/nvme0n2 ]] 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
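The `get_nvme_devs` trace above reads `nvme list` output line by line and keeps only entries that look like /dev/nvme device nodes, skipping the `Node` header row and the dashed separator. A minimal standalone equivalent of that filter (the sample input mimics the log's two namespaces):

```shell
# Keep only /dev/nvme* device paths from `nvme list`-style output,
# discarding the header and separator rows just as the traced loop does.
get_nvme_devs() {
    while read -r dev _; do
        case "$dev" in
            /dev/nvme*) echo "$dev" ;;
        esac
    done
}

get_nvme_devs <<'EOF'
Node                  SN                    Model
--------------------- --------------------- -----------------
/dev/nvme0n1          SPDKISFASTANDAWESOME  SPDK_Controller1
/dev/nvme0n2          SPDKISFASTANDAWESOME  SPDK_Controller1
EOF
# → /dev/nvme0n1
# → /dev/nvme0n2
```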
--------------------- == /dev/nvme* ]] 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:34.296 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:34.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:34.555 rmmod nvme_tcp 00:13:34.555 rmmod nvme_fabrics 00:13:34.555 rmmod nvme_keyring 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2484737 ']' 
00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2484737 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2484737 ']' 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2484737 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2484737 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2484737' 00:13:34.555 killing process with pid 2484737 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2484737 00:13:34.555 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2484737 00:13:34.814 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:34.814 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:34.814 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:34.814 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:34.814 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:34.814 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:13:34.814 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:34.814 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:34.814 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:34.814 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.814 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:34.814 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.352 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:37.352 00:13:37.352 real 0m12.292s 00:13:37.352 user 0m18.510s 00:13:37.352 sys 0m4.805s 00:13:37.352 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.352 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:37.352 ************************************ 00:13:37.352 END TEST nvmf_nvme_cli 00:13:37.352 ************************************ 00:13:37.352 12:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:37.352 12:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:37.352 12:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:37.352 12:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.352 12:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:37.352 ************************************ 00:13:37.352 
START TEST nvmf_vfio_user 00:13:37.352 ************************************ 00:13:37.352 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:37.352 * Looking for test storage... 00:13:37.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:37.352 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:37.352 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:37.353 12:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:37.353 12:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:37.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.353 --rc genhtml_branch_coverage=1 00:13:37.353 --rc genhtml_function_coverage=1 00:13:37.353 --rc genhtml_legend=1 00:13:37.353 --rc geninfo_all_blocks=1 00:13:37.353 --rc geninfo_unexecuted_blocks=1 00:13:37.353 00:13:37.353 ' 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:37.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.353 --rc genhtml_branch_coverage=1 00:13:37.353 --rc genhtml_function_coverage=1 00:13:37.353 --rc genhtml_legend=1 00:13:37.353 --rc geninfo_all_blocks=1 00:13:37.353 --rc geninfo_unexecuted_blocks=1 00:13:37.353 00:13:37.353 ' 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:37.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.353 --rc genhtml_branch_coverage=1 00:13:37.353 --rc genhtml_function_coverage=1 00:13:37.353 --rc genhtml_legend=1 00:13:37.353 --rc geninfo_all_blocks=1 00:13:37.353 --rc geninfo_unexecuted_blocks=1 00:13:37.353 00:13:37.353 ' 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:37.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.353 --rc genhtml_branch_coverage=1 00:13:37.353 --rc genhtml_function_coverage=1 00:13:37.353 --rc genhtml_legend=1 00:13:37.353 --rc geninfo_all_blocks=1 00:13:37.353 --rc geninfo_unexecuted_blocks=1 00:13:37.353 00:13:37.353 ' 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:37.353 
12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:37.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:37.353 12:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:37.353 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:37.354 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:37.354 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:37.354 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:37.354 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:37.354 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2486032 00:13:37.354 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2486032' 00:13:37.354 Process pid: 2486032 00:13:37.354 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:37.354 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2486032 00:13:37.354 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:37.354 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 2486032 ']' 00:13:37.354 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.354 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:37.354 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.354 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:37.354 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:37.354 [2024-11-28 12:37:19.626271] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:13:37.354 [2024-11-28 12:37:19.626321] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.354 [2024-11-28 12:37:19.688665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:37.354 [2024-11-28 12:37:19.728978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.354 [2024-11-28 12:37:19.729018] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.354 [2024-11-28 12:37:19.729027] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.354 [2024-11-28 12:37:19.729033] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.354 [2024-11-28 12:37:19.729038] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:37.354 [2024-11-28 12:37:19.730659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.354 [2024-11-28 12:37:19.730756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.354 [2024-11-28 12:37:19.730850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.354 [2024-11-28 12:37:19.730852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.354 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:37.354 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:37.354 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:38.729 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:38.729 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:38.729 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:38.729 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:38.729 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:38.729 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:38.987 Malloc1 00:13:38.987 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:38.987 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:39.245 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:39.503 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:39.503 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:39.503 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:39.761 Malloc2 00:13:39.761 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:40.019 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:40.019 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:40.335 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:40.335 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:40.335 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:13:40.335 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:40.335 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:40.335 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:40.335 [2024-11-28 12:37:22.734941] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:13:40.335 [2024-11-28 12:37:22.734974] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2486517 ] 00:13:40.335 [2024-11-28 12:37:22.773880] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:40.335 [2024-11-28 12:37:22.782261] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:40.335 [2024-11-28 12:37:22.782284] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff7fdfc6000 00:13:40.335 [2024-11-28 12:37:22.783262] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:40.335 [2024-11-28 12:37:22.784267] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:40.335 [2024-11-28 12:37:22.785273] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:40.335 [2024-11-28 12:37:22.786280] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:40.335 [2024-11-28 12:37:22.787284] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:40.335 [2024-11-28 12:37:22.788288] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:40.335 [2024-11-28 12:37:22.789292] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:40.335 [2024-11-28 12:37:22.790289] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:40.335 [2024-11-28 12:37:22.791305] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:40.335 [2024-11-28 12:37:22.791318] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff7fdfbb000 00:13:40.335 [2024-11-28 12:37:22.792304] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:40.595 [2024-11-28 12:37:22.805897] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:40.595 [2024-11-28 12:37:22.805924] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:40.595 [2024-11-28 12:37:22.810424] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:13:40.595 [2024-11-28 12:37:22.810466] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:40.595 [2024-11-28 12:37:22.810537] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:40.595 [2024-11-28 12:37:22.810550] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:40.595 [2024-11-28 12:37:22.810556] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:40.595 [2024-11-28 12:37:22.811420] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:40.595 [2024-11-28 12:37:22.811433] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:40.595 [2024-11-28 12:37:22.811440] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:40.595 [2024-11-28 12:37:22.812421] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:40.595 [2024-11-28 12:37:22.812429] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:40.595 [2024-11-28 12:37:22.812439] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:40.595 [2024-11-28 12:37:22.813434] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:40.595 [2024-11-28 12:37:22.813442] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:40.595 [2024-11-28 12:37:22.814436] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:40.595 [2024-11-28 12:37:22.814444] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:40.595 [2024-11-28 12:37:22.814449] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:40.595 [2024-11-28 12:37:22.814455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:40.595 [2024-11-28 12:37:22.814562] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:40.595 [2024-11-28 12:37:22.814566] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:40.595 [2024-11-28 12:37:22.814571] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:40.595 [2024-11-28 12:37:22.815445] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:40.595 [2024-11-28 12:37:22.816448] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:40.595 [2024-11-28 12:37:22.817452] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:13:40.595 [2024-11-28 12:37:22.818453] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:40.595 [2024-11-28 12:37:22.818531] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:40.595 [2024-11-28 12:37:22.819465] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:40.595 [2024-11-28 12:37:22.819473] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:40.595 [2024-11-28 12:37:22.819477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:40.595 [2024-11-28 12:37:22.819496] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:40.595 [2024-11-28 12:37:22.819503] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:40.595 [2024-11-28 12:37:22.819520] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:40.595 [2024-11-28 12:37:22.819525] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:40.595 [2024-11-28 12:37:22.819528] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:40.595 [2024-11-28 12:37:22.819540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:40.595 [2024-11-28 12:37:22.819584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:40.595 [2024-11-28 12:37:22.819595] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:40.595 [2024-11-28 12:37:22.819600] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:40.595 [2024-11-28 12:37:22.819604] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:40.595 [2024-11-28 12:37:22.819608] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:40.595 [2024-11-28 12:37:22.819612] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:40.595 [2024-11-28 12:37:22.819616] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:40.595 [2024-11-28 12:37:22.819620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:40.595 [2024-11-28 12:37:22.819627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:40.595 [2024-11-28 12:37:22.819636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:40.595 [2024-11-28 12:37:22.819649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:40.595 [2024-11-28 12:37:22.819658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:40.595 [2024-11-28 
12:37:22.819666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:40.595 [2024-11-28 12:37:22.819673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:40.596 [2024-11-28 12:37:22.819680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:40.596 [2024-11-28 12:37:22.819685] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:40.596 [2024-11-28 12:37:22.819692] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:40.596 [2024-11-28 12:37:22.819701] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:40.596 [2024-11-28 12:37:22.819710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:40.596 [2024-11-28 12:37:22.819716] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:40.596 [2024-11-28 12:37:22.819720] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:40.596 [2024-11-28 12:37:22.819727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:40.596 [2024-11-28 12:37:22.819733] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:13:40.596 [2024-11-28 12:37:22.819740] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:40.596 [2024-11-28 12:37:22.819753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:40.596 [2024-11-28 12:37:22.819805] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:40.596 [2024-11-28 12:37:22.819813] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:40.596 [2024-11-28 12:37:22.819820] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:40.596 [2024-11-28 12:37:22.819824] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:40.596 [2024-11-28 12:37:22.819827] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:40.596 [2024-11-28 12:37:22.819832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:40.596 [2024-11-28 12:37:22.819849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:40.596 [2024-11-28 12:37:22.819858] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:40.596 [2024-11-28 12:37:22.819869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:40.596 [2024-11-28 12:37:22.819875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:40.596 [2024-11-28 12:37:22.819881] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:40.596 [2024-11-28 12:37:22.819885] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:40.596 [2024-11-28 12:37:22.819888] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:40.596 [2024-11-28 12:37:22.819894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:40.596 [2024-11-28 12:37:22.819922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:40.596 [2024-11-28 12:37:22.819932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:40.596 [2024-11-28 12:37:22.819938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:40.596 [2024-11-28 12:37:22.819944] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:40.596 [2024-11-28 12:37:22.819954] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:40.596 [2024-11-28 12:37:22.819957] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:40.596 [2024-11-28 12:37:22.819963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:40.596 [2024-11-28 12:37:22.819978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:40.596 [2024-11-28 12:37:22.819987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:40.596 [2024-11-28 12:37:22.819993] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:40.596 [2024-11-28 12:37:22.820000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:40.596 [2024-11-28 12:37:22.820005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:40.596 [2024-11-28 12:37:22.820010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:40.596 [2024-11-28 12:37:22.820016] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:40.596 [2024-11-28 12:37:22.820021] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:40.596 [2024-11-28 12:37:22.820025] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:40.596 [2024-11-28 12:37:22.820030] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:40.596 [2024-11-28 12:37:22.820046] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:40.596 [2024-11-28 12:37:22.820055] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:40.596 [2024-11-28 12:37:22.820065] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:40.596 [2024-11-28 12:37:22.820074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:40.596 [2024-11-28 12:37:22.820084] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:40.596 [2024-11-28 12:37:22.820096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:40.596 [2024-11-28 12:37:22.820107] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:40.596 [2024-11-28 12:37:22.820117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:40.596 [2024-11-28 12:37:22.820129] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:40.596 [2024-11-28 12:37:22.820133] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:40.596 [2024-11-28 12:37:22.820136] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:40.596 [2024-11-28 12:37:22.820139] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:40.596 [2024-11-28 12:37:22.820142] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:40.596 [2024-11-28 12:37:22.820148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:13:40.596 [2024-11-28 12:37:22.820154] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:40.596 [2024-11-28 12:37:22.820158] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:40.596 [2024-11-28 12:37:22.820161] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:40.596 [2024-11-28 12:37:22.820166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:40.596 [2024-11-28 12:37:22.820172] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:40.596 [2024-11-28 12:37:22.820176] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:40.596 [2024-11-28 12:37:22.820179] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:40.596 [2024-11-28 12:37:22.820184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:40.596 [2024-11-28 12:37:22.820191] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:40.596 [2024-11-28 12:37:22.820196] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:40.596 [2024-11-28 12:37:22.820199] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:40.596 [2024-11-28 12:37:22.820204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:40.596 [2024-11-28 12:37:22.820210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:13:40.596 [2024-11-28 12:37:22.820222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:40.596 [2024-11-28 12:37:22.820231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:40.596 [2024-11-28 12:37:22.820238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:40.596 ===================================================== 00:13:40.596 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:40.596 ===================================================== 00:13:40.596 Controller Capabilities/Features 00:13:40.596 ================================ 00:13:40.596 Vendor ID: 4e58 00:13:40.596 Subsystem Vendor ID: 4e58 00:13:40.596 Serial Number: SPDK1 00:13:40.596 Model Number: SPDK bdev Controller 00:13:40.596 Firmware Version: 25.01 00:13:40.596 Recommended Arb Burst: 6 00:13:40.596 IEEE OUI Identifier: 8d 6b 50 00:13:40.596 Multi-path I/O 00:13:40.596 May have multiple subsystem ports: Yes 00:13:40.596 May have multiple controllers: Yes 00:13:40.596 Associated with SR-IOV VF: No 00:13:40.596 Max Data Transfer Size: 131072 00:13:40.596 Max Number of Namespaces: 32 00:13:40.596 Max Number of I/O Queues: 127 00:13:40.596 NVMe Specification Version (VS): 1.3 00:13:40.596 NVMe Specification Version (Identify): 1.3 00:13:40.596 Maximum Queue Entries: 256 00:13:40.596 Contiguous Queues Required: Yes 00:13:40.596 Arbitration Mechanisms Supported 00:13:40.596 Weighted Round Robin: Not Supported 00:13:40.597 Vendor Specific: Not Supported 00:13:40.597 Reset Timeout: 15000 ms 00:13:40.597 Doorbell Stride: 4 bytes 00:13:40.597 NVM Subsystem Reset: Not Supported 00:13:40.597 Command Sets Supported 00:13:40.597 NVM Command Set: Supported 00:13:40.597 Boot Partition: Not Supported 00:13:40.597 Memory 
Page Size Minimum: 4096 bytes 00:13:40.597 Memory Page Size Maximum: 4096 bytes 00:13:40.597 Persistent Memory Region: Not Supported 00:13:40.597 Optional Asynchronous Events Supported 00:13:40.597 Namespace Attribute Notices: Supported 00:13:40.597 Firmware Activation Notices: Not Supported 00:13:40.597 ANA Change Notices: Not Supported 00:13:40.597 PLE Aggregate Log Change Notices: Not Supported 00:13:40.597 LBA Status Info Alert Notices: Not Supported 00:13:40.597 EGE Aggregate Log Change Notices: Not Supported 00:13:40.597 Normal NVM Subsystem Shutdown event: Not Supported 00:13:40.597 Zone Descriptor Change Notices: Not Supported 00:13:40.597 Discovery Log Change Notices: Not Supported 00:13:40.597 Controller Attributes 00:13:40.597 128-bit Host Identifier: Supported 00:13:40.597 Non-Operational Permissive Mode: Not Supported 00:13:40.597 NVM Sets: Not Supported 00:13:40.597 Read Recovery Levels: Not Supported 00:13:40.597 Endurance Groups: Not Supported 00:13:40.597 Predictable Latency Mode: Not Supported 00:13:40.597 Traffic Based Keep Alive: Not Supported 00:13:40.597 Namespace Granularity: Not Supported 00:13:40.597 SQ Associations: Not Supported 00:13:40.597 UUID List: Not Supported 00:13:40.597 Multi-Domain Subsystem: Not Supported 00:13:40.597 Fixed Capacity Management: Not Supported 00:13:40.597 Variable Capacity Management: Not Supported 00:13:40.597 Delete Endurance Group: Not Supported 00:13:40.597 Delete NVM Set: Not Supported 00:13:40.597 Extended LBA Formats Supported: Not Supported 00:13:40.597 Flexible Data Placement Supported: Not Supported 00:13:40.597 00:13:40.597 Controller Memory Buffer Support 00:13:40.597 ================================ 00:13:40.597 Supported: No 00:13:40.597 00:13:40.597 Persistent Memory Region Support 00:13:40.597 ================================ 00:13:40.597 Supported: No 00:13:40.597 00:13:40.597 Admin Command Set Attributes 00:13:40.597 ============================ 00:13:40.597 Security Send/Receive: Not Supported 
00:13:40.597 Format NVM: Not Supported 00:13:40.597 Firmware Activate/Download: Not Supported 00:13:40.597 Namespace Management: Not Supported 00:13:40.597 Device Self-Test: Not Supported 00:13:40.597 Directives: Not Supported 00:13:40.597 NVMe-MI: Not Supported 00:13:40.597 Virtualization Management: Not Supported 00:13:40.597 Doorbell Buffer Config: Not Supported 00:13:40.597 Get LBA Status Capability: Not Supported 00:13:40.597 Command & Feature Lockdown Capability: Not Supported 00:13:40.597 Abort Command Limit: 4 00:13:40.597 Async Event Request Limit: 4 00:13:40.597 Number of Firmware Slots: N/A 00:13:40.597 Firmware Slot 1 Read-Only: N/A 00:13:40.597 Firmware Activation Without Reset: N/A 00:13:40.597 Multiple Update Detection Support: N/A 00:13:40.597 Firmware Update Granularity: No Information Provided 00:13:40.597 Per-Namespace SMART Log: No 00:13:40.597 Asymmetric Namespace Access Log Page: Not Supported 00:13:40.597 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:40.597 Command Effects Log Page: Supported 00:13:40.597 Get Log Page Extended Data: Supported 00:13:40.597 Telemetry Log Pages: Not Supported 00:13:40.597 Persistent Event Log Pages: Not Supported 00:13:40.597 Supported Log Pages Log Page: May Support 00:13:40.597 Commands Supported & Effects Log Page: Not Supported 00:13:40.597 Feature Identifiers & Effects Log Page: May Support 00:13:40.597 NVMe-MI Commands & Effects Log Page: May Support 00:13:40.597 Data Area 4 for Telemetry Log: Not Supported 00:13:40.597 Error Log Page Entries Supported: 128 00:13:40.597 Keep Alive: Supported 00:13:40.597 Keep Alive Granularity: 10000 ms 00:13:40.597 00:13:40.597 NVM Command Set Attributes 00:13:40.597 ========================== 00:13:40.597 Submission Queue Entry Size 00:13:40.597 Max: 64 00:13:40.597 Min: 64 00:13:40.597 Completion Queue Entry Size 00:13:40.597 Max: 16 00:13:40.597 Min: 16 00:13:40.597 Number of Namespaces: 32 00:13:40.597 Compare Command: Supported 00:13:40.597 Write Uncorrectable 
Command: Not Supported 00:13:40.597 Dataset Management Command: Supported 00:13:40.597 Write Zeroes Command: Supported 00:13:40.597 Set Features Save Field: Not Supported 00:13:40.597 Reservations: Not Supported 00:13:40.597 Timestamp: Not Supported 00:13:40.597 Copy: Supported 00:13:40.597 Volatile Write Cache: Present 00:13:40.597 Atomic Write Unit (Normal): 1 00:13:40.597 Atomic Write Unit (PFail): 1 00:13:40.597 Atomic Compare & Write Unit: 1 00:13:40.597 Fused Compare & Write: Supported 00:13:40.597 Scatter-Gather List 00:13:40.597 SGL Command Set: Supported (Dword aligned) 00:13:40.597 SGL Keyed: Not Supported 00:13:40.597 SGL Bit Bucket Descriptor: Not Supported 00:13:40.597 SGL Metadata Pointer: Not Supported 00:13:40.597 Oversized SGL: Not Supported 00:13:40.597 SGL Metadata Address: Not Supported 00:13:40.597 SGL Offset: Not Supported 00:13:40.597 Transport SGL Data Block: Not Supported 00:13:40.597 Replay Protected Memory Block: Not Supported 00:13:40.597 00:13:40.597 Firmware Slot Information 00:13:40.597 ========================= 00:13:40.597 Active slot: 1 00:13:40.597 Slot 1 Firmware Revision: 25.01 00:13:40.597 00:13:40.597 00:13:40.597 Commands Supported and Effects 00:13:40.597 ============================== 00:13:40.597 Admin Commands 00:13:40.597 -------------- 00:13:40.597 Get Log Page (02h): Supported 00:13:40.597 Identify (06h): Supported 00:13:40.597 Abort (08h): Supported 00:13:40.597 Set Features (09h): Supported 00:13:40.597 Get Features (0Ah): Supported 00:13:40.597 Asynchronous Event Request (0Ch): Supported 00:13:40.597 Keep Alive (18h): Supported 00:13:40.597 I/O Commands 00:13:40.597 ------------ 00:13:40.597 Flush (00h): Supported LBA-Change 00:13:40.597 Write (01h): Supported LBA-Change 00:13:40.597 Read (02h): Supported 00:13:40.597 Compare (05h): Supported 00:13:40.597 Write Zeroes (08h): Supported LBA-Change 00:13:40.597 Dataset Management (09h): Supported LBA-Change 00:13:40.597 Copy (19h): Supported LBA-Change 00:13:40.597 
00:13:40.597 Error Log 00:13:40.597 ========= 00:13:40.597 00:13:40.597 Arbitration 00:13:40.597 =========== 00:13:40.597 Arbitration Burst: 1 00:13:40.597 00:13:40.597 Power Management 00:13:40.597 ================ 00:13:40.597 Number of Power States: 1 00:13:40.597 Current Power State: Power State #0 00:13:40.597 Power State #0: 00:13:40.597 Max Power: 0.00 W 00:13:40.597 Non-Operational State: Operational 00:13:40.597 Entry Latency: Not Reported 00:13:40.597 Exit Latency: Not Reported 00:13:40.597 Relative Read Throughput: 0 00:13:40.597 Relative Read Latency: 0 00:13:40.597 Relative Write Throughput: 0 00:13:40.597 Relative Write Latency: 0 00:13:40.597 Idle Power: Not Reported 00:13:40.597 Active Power: Not Reported 00:13:40.597 Non-Operational Permissive Mode: Not Supported 00:13:40.597 00:13:40.597 Health Information 00:13:40.597 ================== 00:13:40.597 Critical Warnings: 00:13:40.597 Available Spare Space: OK 00:13:40.597 Temperature: OK 00:13:40.597 Device Reliability: OK 00:13:40.597 Read Only: No 00:13:40.597 Volatile Memory Backup: OK 00:13:40.597 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:40.597 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:40.597 Available Spare: 0% 00:13:40.597 Available Sp[2024-11-28 12:37:22.820325] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:40.597 [2024-11-28 12:37:22.820335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:40.597 [2024-11-28 12:37:22.820361] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:40.597 [2024-11-28 12:37:22.820369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.597 [2024-11-28 12:37:22.820375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.597 [2024-11-28 12:37:22.820381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.597 [2024-11-28 12:37:22.820387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.597 [2024-11-28 12:37:22.820474] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:40.597 [2024-11-28 12:37:22.820484] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:40.597 [2024-11-28 12:37:22.821479] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:40.597 [2024-11-28 12:37:22.821527] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:40.597 [2024-11-28 12:37:22.821533] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:40.598 [2024-11-28 12:37:22.822488] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:40.598 [2024-11-28 12:37:22.822499] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:40.598 [2024-11-28 12:37:22.822547] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:40.598 [2024-11-28 12:37:22.827955] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:40.598 are Threshold: 0% 00:13:40.598 Life Percentage Used: 0% 
00:13:40.598 Data Units Read: 0 00:13:40.598 Data Units Written: 0 00:13:40.598 Host Read Commands: 0 00:13:40.598 Host Write Commands: 0 00:13:40.598 Controller Busy Time: 0 minutes 00:13:40.598 Power Cycles: 0 00:13:40.598 Power On Hours: 0 hours 00:13:40.598 Unsafe Shutdowns: 0 00:13:40.598 Unrecoverable Media Errors: 0 00:13:40.598 Lifetime Error Log Entries: 0 00:13:40.598 Warning Temperature Time: 0 minutes 00:13:40.598 Critical Temperature Time: 0 minutes 00:13:40.598 00:13:40.598 Number of Queues 00:13:40.598 ================ 00:13:40.598 Number of I/O Submission Queues: 127 00:13:40.598 Number of I/O Completion Queues: 127 00:13:40.598 00:13:40.598 Active Namespaces 00:13:40.598 ================= 00:13:40.598 Namespace ID:1 00:13:40.598 Error Recovery Timeout: Unlimited 00:13:40.598 Command Set Identifier: NVM (00h) 00:13:40.598 Deallocate: Supported 00:13:40.598 Deallocated/Unwritten Error: Not Supported 00:13:40.598 Deallocated Read Value: Unknown 00:13:40.598 Deallocate in Write Zeroes: Not Supported 00:13:40.598 Deallocated Guard Field: 0xFFFF 00:13:40.598 Flush: Supported 00:13:40.598 Reservation: Supported 00:13:40.598 Namespace Sharing Capabilities: Multiple Controllers 00:13:40.598 Size (in LBAs): 131072 (0GiB) 00:13:40.598 Capacity (in LBAs): 131072 (0GiB) 00:13:40.598 Utilization (in LBAs): 131072 (0GiB) 00:13:40.598 NGUID: B7C442B9D2A745008C0CDE54A0A792D7 00:13:40.598 UUID: b7c442b9-d2a7-4500-8c0c-de54a0a792d7 00:13:40.598 Thin Provisioning: Not Supported 00:13:40.598 Per-NS Atomic Units: Yes 00:13:40.598 Atomic Boundary Size (Normal): 0 00:13:40.598 Atomic Boundary Size (PFail): 0 00:13:40.598 Atomic Boundary Offset: 0 00:13:40.598 Maximum Single Source Range Length: 65535 00:13:40.598 Maximum Copy Length: 65535 00:13:40.598 Maximum Source Range Count: 1 00:13:40.598 NGUID/EUI64 Never Reused: No 00:13:40.598 Namespace Write Protected: No 00:13:40.598 Number of LBA Formats: 1 00:13:40.598 Current LBA Format: LBA Format #00 00:13:40.598 LBA 
Format #00: Data Size: 512 Metadata Size: 0
00:13:40.598
00:13:40.598 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:13:40.598 [2024-11-28 12:37:23.057499] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:13:45.868 Initializing NVMe Controllers
00:13:45.868 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:13:45.868 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:13:45.868 Initialization complete. Launching workers.
00:13:45.868 ========================================================
00:13:45.868 Latency(us)
00:13:45.868 Device Information : IOPS MiB/s Average min max
00:13:45.868 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39941.94 156.02 3204.49 995.52 6584.28
00:13:45.868 ========================================================
00:13:45.868 Total : 39941.94 156.02 3204.49 995.52 6584.28
00:13:45.868
00:13:45.868 [2024-11-28 12:37:28.077575] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:13:45.868 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:13:45.868 [2024-11-28 12:37:28.318667] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:13:51.358 Initializing NVMe Controllers
00:13:51.358 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:13:51.358 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:13:51.358 Initialization complete. Launching workers.
00:13:51.358 ========================================================
00:13:51.358 Latency(us)
00:13:51.358 Device Information : IOPS MiB/s Average min max
00:13:51.358 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.60 62.60 7997.25 6985.13 11973.29
00:13:51.358 ========================================================
00:13:51.358 Total : 16025.60 62.60 7997.25 6985.13 11973.29
00:13:51.358
00:13:51.358 [2024-11-28 12:37:33.356075] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:13:51.358 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:13:51.358 [2024-11-28 12:37:33.560036] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:13:56.630 [2024-11-28 12:37:38.637205] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:13:56.630 Initializing NVMe Controllers
00:13:56.630 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:13:56.630 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:13:56.630 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1
00:13:56.630 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2
00:13:56.630 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3
00:13:56.630 Initialization complete. Launching workers.
00:13:56.630 Starting thread on core 2
00:13:56.630 Starting thread on core 3
00:13:56.630 Starting thread on core 1
00:13:56.630 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g
00:13:56.630 [2024-11-28 12:37:38.937337] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:13:59.918 [2024-11-28 12:37:42.154148] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:13:59.918 Initializing NVMe Controllers
00:13:59.918 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:13:59.918 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:13:59.918 Associating SPDK bdev Controller (SPDK1 ) with lcore 0
00:13:59.918 Associating SPDK bdev Controller (SPDK1 ) with lcore 1
00:13:59.918 Associating SPDK bdev Controller (SPDK1 ) with lcore 2
00:13:59.918 Associating SPDK bdev Controller (SPDK1 ) with lcore 3
00:13:59.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:13:59.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:13:59.918 Initialization complete. Launching workers.
00:13:59.918 Starting thread on core 1 with urgent priority queue
00:13:59.918 Starting thread on core 2 with urgent priority queue
00:13:59.918 Starting thread on core 3 with urgent priority queue
00:13:59.918 Starting thread on core 0 with urgent priority queue
00:13:59.918 SPDK bdev Controller (SPDK1 ) core 0: 1007.00 IO/s 99.30 secs/100000 ios
00:13:59.918 SPDK bdev Controller (SPDK1 ) core 1: 1090.33 IO/s 91.72 secs/100000 ios
00:13:59.918 SPDK bdev Controller (SPDK1 ) core 2: 1153.67 IO/s 86.68 secs/100000 ios
00:13:59.918 SPDK bdev Controller (SPDK1 ) core 3: 933.67 IO/s 107.10 secs/100000 ios
00:13:59.918 ========================================================
00:13:59.918
00:13:59.918 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:14:00.175 [2024-11-28 12:37:42.441398] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:14:00.175 Initializing NVMe Controllers
00:14:00.175 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:14:00.175 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:14:00.175 Namespace ID: 1 size: 0GB
00:14:00.175 Initialization complete.
00:14:00.175 INFO: using host memory buffer for IO
00:14:00.175 Hello world!
00:14:00.175 [2024-11-28 12:37:42.475648] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:14:00.175 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:14:00.433 [2024-11-28 12:37:42.762354] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:14:01.369 Initializing NVMe Controllers
00:14:01.369 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:14:01.369 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:14:01.369 Initialization complete. Launching workers.
00:14:01.369 submit (in ns) avg, min, max = 6909.1, 3267.8, 3999453.0
00:14:01.369 complete (in ns) avg, min, max = 19119.5, 1773.9, 4994201.7
00:14:01.369
00:14:01.370 Submit histogram
00:14:01.370 ================
00:14:01.370 Range in us Cumulative Count
00:14:01.370 3.256 - 3.270: 0.0062% ( 1)
00:14:01.370 3.270 - 3.283: 0.1671% ( 26)
00:14:01.370 3.283 - 3.297: 1.7203% ( 251)
00:14:01.370 3.297 - 3.311: 5.8478% ( 667)
00:14:01.370 3.311 - 3.325: 11.2562% ( 874)
00:14:01.370 3.325 - 3.339: 17.2587% ( 970)
00:14:01.370 3.339 - 3.353: 23.7438% ( 1048)
00:14:01.370 3.353 - 3.367: 29.7215% ( 966)
00:14:01.370 3.367 - 3.381: 35.4889% ( 932)
00:14:01.370 3.381 - 3.395: 41.0953% ( 906)
00:14:01.370 3.395 - 3.409: 45.4394% ( 702)
00:14:01.370 3.409 - 3.423: 49.7649% ( 699)
00:14:01.370 3.423 - 3.437: 54.2203% ( 720)
00:14:01.370 3.437 - 3.450: 60.9406% ( 1086)
00:14:01.370 3.450 - 3.464: 66.3366% ( 872)
00:14:01.370 3.464 - 3.478: 70.8292% ( 726)
00:14:01.370 3.478 - 3.492: 76.4109% ( 902)
00:14:01.370 3.492 - 3.506: 80.6807% ( 690)
00:14:01.370 3.506 - 3.520: 83.3601% ( 433)
00:14:01.370 3.520 - 3.534: 85.3403% ( 320)
00:14:01.370 3.534 - 3.548: 86.4233% ( 175)
00:14:01.370 3.548 - 3.562: 87.0606% ( 103)
00:14:01.370 3.562 - 3.590: 87.7661% ( 114)
00:14:01.370 3.590 - 3.617: 89.1460% ( 223)
00:14:01.370 3.617 - 3.645: 90.7921% ( 266)
00:14:01.370 3.645 - 3.673: 92.5804% ( 289)
00:14:01.370 3.673 - 3.701: 94.6040% ( 327)
00:14:01.370 3.701 - 3.729: 96.2686% ( 269)
00:14:01.370 3.729 - 3.757: 97.6609% ( 225)
00:14:01.370 3.757 - 3.784: 98.5272% ( 140)
00:14:01.370 3.784 - 3.812: 99.0965% ( 92)
00:14:01.370 3.812 - 3.840: 99.3502% ( 41)
00:14:01.370 3.840 - 3.868: 99.5297% ( 29)
00:14:01.370 3.868 - 3.896: 99.5792% ( 8)
00:14:01.370 3.896 - 3.923: 99.6101% ( 5)
00:14:01.370 3.923 - 3.951: 99.6287% ( 3)
00:14:01.370 3.951 - 3.979: 99.6349% ( 1)
00:14:01.370 4.007 - 4.035: 99.6411% ( 1)
00:14:01.370 4.035 - 4.063: 99.6535% ( 2)
00:14:01.370 4.090 - 4.118: 99.6597% ( 1)
00:14:01.370 4.118 - 4.146: 99.6658% ( 1)
00:14:01.370 5.259 - 5.287: 99.6720% ( 1)
00:14:01.370 5.287 - 5.315: 99.6782% ( 1)
00:14:01.370 5.370 - 5.398: 99.6844% ( 1)
00:14:01.370 5.482 - 5.510: 99.6906% ( 1)
00:14:01.370 5.537 - 5.565: 99.7030% ( 2)
00:14:01.370 5.565 - 5.593: 99.7092% ( 1)
00:14:01.370 5.649 - 5.677: 99.7153% ( 1)
00:14:01.370 5.677 - 5.704: 99.7277% ( 2)
00:14:01.370 5.704 - 5.732: 99.7339% ( 1)
00:14:01.370 5.732 - 5.760: 99.7401% ( 1)
00:14:01.370 5.788 - 5.816: 99.7525% ( 2)
00:14:01.370 5.899 - 5.927: 99.7587% ( 1)
00:14:01.370 5.955 - 5.983: 99.7649% ( 1)
00:14:01.370 5.983 - 6.010: 99.7772% ( 2)
00:14:01.370 6.010 - 6.038: 99.7896% ( 2)
00:14:01.370 6.066 - 6.094: 99.7958% ( 1)
00:14:01.370 6.094 - 6.122: 99.8020% ( 1)
00:14:01.370 6.177 - 6.205: 99.8082% ( 1)
00:14:01.370 6.233 - 6.261: 99.8144% ( 1)
00:14:01.370 6.289 - 6.317: 99.8205% ( 1)
00:14:01.370 6.344 - 6.372: 99.8329% ( 2)
00:14:01.370 6.372 - 6.400: 99.8391% ( 1)
00:14:01.370 6.428 - 6.456: 99.8453% ( 1)
00:14:01.370 6.567 - 6.595: 99.8577% ( 2)
00:14:01.370 6.623 - 6.650: 99.8639% ( 1)
00:14:01.370 6.706 - 6.734: 99.8700% ( 1)
00:14:01.370 6.790 - 6.817: 99.8824% ( 2)
00:14:01.370 6.901 - 6.929: 99.8886% ( 1)
00:14:01.370 6.984 - 7.012: 99.8948% ( 1)
00:14:01.370 7.346 - 7.402: 99.9010% ( 1)
00:14:01.370 8.237 - 8.292: 99.9072% ( 1)
00:14:01.370 10.852 - 10.908: 99.9134% ( 1)
00:14:01.370 3989.148 - 4017.642: 100.0000% ( 14)
00:14:01.370
00:14:01.370 Complete histogram
00:14:01.370 ==================
00:14:01.370 Range in us Cumulative Count
00:14:01.370 1.774 - 1.781: 0.0186% ( 3)
00:14:01.370 1.781 - 1.795: 0.0990% ( 13)
00:14:01.370 1.795 - 1.809: 0.1238% ( 4)
00:14:01.370 1.809 - [2024-11-28 12:37:43.786223] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:14:01.370 1.823: 0.8787% ( 122)
00:14:01.370 1.823 - 1.837: 26.6275% ( 4161)
00:14:01.370 1.837 - 1.850: 56.9616% ( 4902)
00:14:01.370 1.850 - 1.864: 62.8218% ( 947)
00:14:01.370 1.864 - 1.878: 69.8762% ( 1140)
00:14:01.370 1.878 - 1.892: 86.5656% ( 2697)
00:14:01.370 1.892 - 1.906: 93.3230% ( 1092)
00:14:01.370 1.906 - 1.920: 96.7698% ( 557)
00:14:01.370 1.920 - 1.934: 97.8960% ( 182)
00:14:01.370 1.934 - 1.948: 98.2921% ( 64)
00:14:01.370 1.948 - 1.962: 98.7686% ( 77)
00:14:01.370 1.962 - 1.976: 99.0408% ( 44)
00:14:01.370 1.976 - 1.990: 99.0903% ( 8)
00:14:01.370 1.990 - 2.003: 99.1151% ( 4)
00:14:01.370 2.017 - 2.031: 99.1460% ( 5)
00:14:01.370 2.031 - 2.045: 99.1522% ( 1)
00:14:01.370 2.045 - 2.059: 99.1646% ( 2)
00:14:01.370 2.059 - 2.073: 99.1770% ( 2)
00:14:01.370 2.087 - 2.101: 99.1832% ( 1)
00:14:01.370 2.101 - 2.115: 99.1894% ( 1)
00:14:01.370 2.129 - 2.143: 99.2017% ( 2)
00:14:01.370 2.143 - 2.157: 99.2203% ( 3)
00:14:01.370 2.157 - 2.170: 99.2265% ( 1)
00:14:01.370 2.184 - 2.198: 99.2450% ( 3)
00:14:01.370 2.212 - 2.226: 99.2636% ( 3)
00:14:01.370 2.226 - 2.240: 99.2822% ( 3)
00:14:01.370 2.240 - 2.254: 99.2884% ( 1)
00:14:01.370 2.254 - 2.268: 99.2946% ( 1)
00:14:01.370 2.268 - 2.282: 99.3131% ( 3)
00:14:01.370 2.282 - 2.296: 99.3193% ( 1)
00:14:01.370 2.296 - 2.310: 99.3255% ( 1)
00:14:01.370 2.337 - 2.351: 99.3379% ( 2)
00:14:01.370 3.979 - 4.007: 99.3502% ( 2)
00:14:01.370 4.035 - 4.063: 99.3564% ( 1)
00:14:01.370 4.063 - 4.090: 99.3750% ( 3)
00:14:01.370 4.090 - 4.118: 99.3812% ( 1)
00:14:01.370 4.146 - 4.174: 99.3874% ( 1)
00:14:01.370 4.202 - 4.230: 99.4059% ( 3)
00:14:01.370 4.230 - 4.257: 99.4121% ( 1)
00:14:01.370 4.313 - 4.341: 99.4183% ( 1)
00:14:01.370 4.424 - 4.452: 99.4245% ( 1)
00:14:01.370 4.508 - 4.536: 99.4307% ( 1)
00:14:01.370 4.536 - 4.563: 99.4369% ( 1)
00:14:01.370 4.591 - 4.619: 99.4493% ( 2)
00:14:01.370 4.619 - 4.647: 99.4616% ( 2)
00:14:01.370 4.786 - 4.814: 99.4678% ( 1)
00:14:01.370 4.842 - 4.870: 99.4740% ( 1)
00:14:01.370 4.897 - 4.925: 99.4802% ( 1)
00:14:01.370 5.092 - 5.120: 99.4926% ( 2)
00:14:01.370 5.231 - 5.259: 99.4988% ( 1)
00:14:01.370 5.259 - 5.287: 99.5050% ( 1)
00:14:01.370 5.482 - 5.510: 99.5111% ( 1)
00:14:01.370 5.760 - 5.788: 99.5235% ( 2)
00:14:01.370 6.150 - 6.177: 99.5297% ( 1)
00:14:01.370 7.624 - 7.680: 99.5359% ( 1)
00:14:01.370 9.572 - 9.628: 99.5421% ( 1)
00:14:01.370 9.628 - 9.683: 99.5483% ( 1)
00:14:01.370 14.080 - 14.136: 99.5545% ( 1)
00:14:01.370 14.358 - 14.470: 99.5606% ( 1)
00:14:01.370 40.070 - 40.292: 99.5668% ( 1)
00:14:01.370 2179.784 - 2194.031: 99.5730% ( 1)
00:14:01.370 3989.148 - 4017.642: 99.9938% ( 68)
00:14:01.370 4986.435 - 5014.929: 100.0000% ( 1)
00:14:01.370
00:14:01.370 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1
00:14:01.370 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1
00:14:01.370 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1
00:14:01.370 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3
00:14:01.370 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:14:01.629 [
00:14:01.629 {
00:14:01.629 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:14:01.629 "subtype": "Discovery",
00:14:01.629 "listen_addresses": [],
00:14:01.629 "allow_any_host": true,
00:14:01.629 "hosts": []
00:14:01.629 },
00:14:01.629 {
00:14:01.629 "nqn": "nqn.2019-07.io.spdk:cnode1",
00:14:01.629 "subtype": "NVMe",
00:14:01.629 "listen_addresses": [
00:14:01.629 {
00:14:01.629 "trtype": "VFIOUSER",
00:14:01.629 "adrfam": "IPv4",
00:14:01.629 "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:14:01.629 "trsvcid": "0"
00:14:01.629 }
00:14:01.629 ],
00:14:01.629 "allow_any_host": true,
00:14:01.629 "hosts": [],
00:14:01.629 "serial_number": "SPDK1",
00:14:01.629 "model_number": "SPDK bdev Controller",
00:14:01.629 "max_namespaces": 32,
00:14:01.629 "min_cntlid": 1,
00:14:01.629 "max_cntlid": 65519,
00:14:01.629 "namespaces": [
00:14:01.629 {
00:14:01.629 "nsid": 1,
00:14:01.629 "bdev_name": "Malloc1",
00:14:01.629 "name": "Malloc1",
00:14:01.629 "nguid": "B7C442B9D2A745008C0CDE54A0A792D7",
00:14:01.629 "uuid": "b7c442b9-d2a7-4500-8c0c-de54a0a792d7"
00:14:01.629 }
00:14:01.629 ]
00:14:01.629 },
00:14:01.629 {
00:14:01.629 "nqn": "nqn.2019-07.io.spdk:cnode2",
00:14:01.629 "subtype": "NVMe",
00:14:01.629 "listen_addresses": [
00:14:01.629 {
00:14:01.629 "trtype": "VFIOUSER",
00:14:01.629 "adrfam": "IPv4",
00:14:01.629 "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:14:01.629 "trsvcid": "0"
00:14:01.629 }
00:14:01.629 ],
00:14:01.629 "allow_any_host": true,
00:14:01.629 "hosts": [],
00:14:01.629 "serial_number": "SPDK2",
00:14:01.629 "model_number": "SPDK bdev Controller",
00:14:01.629 "max_namespaces": 32,
00:14:01.629 "min_cntlid": 1,
00:14:01.629 "max_cntlid": 65519,
00:14:01.630 "namespaces": [
00:14:01.630 {
00:14:01.630 "nsid": 1,
00:14:01.630 "bdev_name": "Malloc2",
00:14:01.630 "name": "Malloc2",
00:14:01.630 "nguid": "E27082B448CE476EA8EDB95FE3752DE4",
00:14:01.630 "uuid": "e27082b4-48ce-476e-a8ed-b95fe3752de4"
00:14:01.630 }
00:14:01.630 ]
00:14:01.630 }
00:14:01.630 ]
00:14:01.630 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:14:01.630 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2490142
00:14:01.630 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file
00:14:01.630 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file
00:14:01.630 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0
00:14:01.630 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:14:01.630 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:14:01.630 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0
00:14:01.630 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file
00:14:01.630 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
00:14:01.888 [2024-11-28 12:37:44.202358] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:14:01.888 Malloc3
00:14:01.888 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
00:14:02.147 [2024-11-28 12:37:44.444151] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:14:02.147 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:14:02.147 Asynchronous Event Request test
00:14:02.147 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:14:02.147 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:14:02.147 Registering asynchronous event callbacks...
00:14:02.147 Starting namespace attribute notice tests for all controllers...
00:14:02.147 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:14:02.147 aer_cb - Changed Namespace
00:14:02.147 Cleaning up...
00:14:02.147 [
00:14:02.147 {
00:14:02.147 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:14:02.147 "subtype": "Discovery",
00:14:02.147 "listen_addresses": [],
00:14:02.147 "allow_any_host": true,
00:14:02.147 "hosts": []
00:14:02.147 },
00:14:02.147 {
00:14:02.147 "nqn": "nqn.2019-07.io.spdk:cnode1",
00:14:02.147 "subtype": "NVMe",
00:14:02.147 "listen_addresses": [
00:14:02.147 {
00:14:02.147 "trtype": "VFIOUSER",
00:14:02.147 "adrfam": "IPv4",
00:14:02.147 "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:14:02.147 "trsvcid": "0"
00:14:02.147 }
00:14:02.147 ],
00:14:02.147 "allow_any_host": true,
00:14:02.147 "hosts": [],
00:14:02.147 "serial_number": "SPDK1",
00:14:02.147 "model_number": "SPDK bdev Controller",
00:14:02.147 "max_namespaces": 32,
00:14:02.147 "min_cntlid": 1,
00:14:02.147 "max_cntlid": 65519,
00:14:02.147 "namespaces": [
00:14:02.147 {
00:14:02.147 "nsid": 1,
00:14:02.147 "bdev_name": "Malloc1",
00:14:02.147 "name": "Malloc1",
00:14:02.147 "nguid": "B7C442B9D2A745008C0CDE54A0A792D7",
00:14:02.147 "uuid": "b7c442b9-d2a7-4500-8c0c-de54a0a792d7"
00:14:02.147 },
00:14:02.147 {
00:14:02.147 "nsid": 2,
00:14:02.147 "bdev_name": "Malloc3",
00:14:02.147 "name": "Malloc3",
00:14:02.147 "nguid": "B35EF8D76E984DE69DA214BEC9E47B03",
00:14:02.147 "uuid": "b35ef8d7-6e98-4de6-9da2-14bec9e47b03"
00:14:02.147 }
00:14:02.147 ]
00:14:02.147 },
00:14:02.147 {
00:14:02.147 "nqn": "nqn.2019-07.io.spdk:cnode2",
00:14:02.147 "subtype": "NVMe",
00:14:02.147 "listen_addresses": [
00:14:02.147 {
00:14:02.147 "trtype": "VFIOUSER",
00:14:02.147 "adrfam": "IPv4",
00:14:02.147 "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:14:02.147 "trsvcid": "0"
00:14:02.147 }
00:14:02.147 ],
00:14:02.147 "allow_any_host": true,
00:14:02.147 "hosts": [],
00:14:02.147 "serial_number": "SPDK2",
00:14:02.147 "model_number": "SPDK bdev Controller",
00:14:02.147 "max_namespaces": 32,
00:14:02.147 "min_cntlid": 1,
00:14:02.147 "max_cntlid": 65519,
00:14:02.147 "namespaces": [
00:14:02.147 {
00:14:02.147 "nsid": 1,
00:14:02.147 "bdev_name": "Malloc2",
00:14:02.147 "name": "Malloc2",
00:14:02.147 "nguid": "E27082B448CE476EA8EDB95FE3752DE4",
00:14:02.147 "uuid": "e27082b4-48ce-476e-a8ed-b95fe3752de4"
00:14:02.147 }
00:14:02.147 ]
00:14:02.147 }
00:14:02.147 ]
00:14:02.407 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2490142
00:14:02.407 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:14:02.407 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2
00:14:02.407 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2
00:14:02.407 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci
00:14:02.407 [2024-11-28 12:37:44.696183] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization...
00:14:02.408 [2024-11-28 12:37:44.696231] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2490196 ]
00:14:02.408 [2024-11-28 12:37:44.736753] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2
00:14:02.408 [2024-11-28 12:37:44.741016] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
00:14:02.408 [2024-11-28 12:37:44.741042] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8d25bba000
00:14:02.408 [2024-11-28 12:37:44.742010] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:14:02.408 [2024-11-28 12:37:44.743022] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:14:02.408 [2024-11-28 12:37:44.744025] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:14:02.408 [2024-11-28 12:37:44.745036] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0
00:14:02.408 [2024-11-28 12:37:44.746045] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:14:02.408 [2024-11-28 12:37:44.747053] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:14:02.408 [2024-11-28 12:37:44.748061] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:14:02.408 [2024-11-28 12:37:44.749069] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:14:02.408 [2024-11-28 12:37:44.750080] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32
00:14:02.408 [2024-11-28 12:37:44.750091] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8d25baf000
00:14:02.408 [2024-11-28 12:37:44.751217] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:14:02.408 [2024-11-28 12:37:44.761720] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully
00:14:02.408 [2024-11-28 12:37:44.761744] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout)
00:14:02.408 [2024-11-28 12:37:44.766838] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff
00:14:02.408 [2024-11-28 12:37:44.766877] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192
00:14:02.408 [2024-11-28 12:37:44.766945] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout)
00:14:02.408 [2024-11-28 12:37:44.766960] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout)
00:14:02.408 [2024-11-28 12:37:44.766966] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout)
00:14:02.408 [2024-11-28 12:37:44.767842] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300
00:14:02.408 [2024-11-28 12:37:44.767853] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout)
00:14:02.408 [2024-11-28 12:37:44.767860] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout)
00:14:02.408 [2024-11-28 12:37:44.768852] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff
00:14:02.408 [2024-11-28 12:37:44.768861] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout)
00:14:02.408 [2024-11-28 12:37:44.768868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms)
00:14:02.408 [2024-11-28 12:37:44.769864] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0
00:14:02.408 [2024-11-28 12:37:44.769873] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:14:02.408 [2024-11-28 12:37:44.770871] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0
00:14:02.408 [2024-11-28 12:37:44.770878] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0
00:14:02.408 [2024-11-28 12:37:44.770883] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms)
00:14:02.408 [2024-11-28 12:37:44.770889] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:14:02.408 [2024-11-28 12:37:44.770999] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1
00:14:02.408 [2024-11-28 12:37:44.771004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:14:02.408 [2024-11-28 12:37:44.771009] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000
00:14:02.408 [2024-11-28 12:37:44.771877] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000
00:14:02.408 [2024-11-28 12:37:44.772882] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff
00:14:02.408 [2024-11-28 12:37:44.773891] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:14:02.408 [2024-11-28 12:37:44.774889] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:14:02.408 [2024-11-28 12:37:44.774926] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:14:02.408 [2024-11-28 12:37:44.775903] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1
00:14:02.408 [2024-11-28 12:37:44.775912] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:14:02.408 [2024-11-28 12:37:44.775916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms)
00:14:02.408 [2024-11-28 12:37:44.775933] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout)
00:14:02.408 [2024-11-28 12:37:44.775943] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms)
00:14:02.408 [2024-11-28 12:37:44.775960] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:14:02.408 [2024-11-28 12:37:44.775965] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:14:02.408 [2024-11-28 12:37:44.775969] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:14:02.408 [2024-11-28 12:37:44.775979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:14:02.408 [2024-11-28 12:37:44.783955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0
00:14:02.408 [2024-11-28 12:37:44.783966] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072
00:14:02.408 [2024-11-28 12:37:44.783970] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072
00:14:02.408 [2024-11-28 12:37:44.783974] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001
00:14:02.408 [2024-11-28 12:37:44.783978] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000
00:14:02.408 [2024-11-28 12:37:44.783983] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1
00:14:02.408 [2024-11-28 12:37:44.783987] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1
00:14:02.408 [2024-11-28 12:37:44.783991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms)
00:14:02.408 [2024-11-28 12:37:44.784000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms)
00:14:02.408 [2024-11-28 12:37:44.784010] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0
00:14:02.408 [2024-11-28 12:37:44.791952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0
00:14:02.408 [2024-11-28 12:37:44.791974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:14:02.408 [2024-11-28 12:37:44.791982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:14:02.408 [2024-11-28 12:37:44.791989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:14:02.408 [2024-11-28 12:37:44.791997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:14:02.408 [2024-11-28 12:37:44.792001] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms)
00:14:02.408 [2024-11-28 12:37:44.792012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:14:02.408 [2024-11-28 12:37:44.792020] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0
00:14:02.408 [2024-11-28 12:37:44.799954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0
00:14:02.408 [2024-11-28 12:37:44.799962] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms
00:14:02.408 [2024-11-28 12:37:44.799967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms)
00:14:02.408 [2024-11-28 12:37:44.799978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms)
00:14:02.408 [2024-11-28 12:37:44.799983] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms)
00:14:02.408 [2024-11-28 12:37:44.799991] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:14:02.408 [2024-11-28 12:37:44.807952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0
00:14:02.408 [2024-11-28 12:37:44.808009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms)
00:14:02.408 [2024-11-28 12:37:44.808017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms)
00:14:02.409 [2024-11-28 12:37:44.808023] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096
00:14:02.409 [2024-11-28 12:37:44.808028] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000
00:14:02.409 [2024-11-28 12:37:44.808031] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:14:02.409 [2024-11-28 12:37:44.808037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0
00:14:02.409 [2024-11-28 12:37:44.815953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0
00:14:02.409 [2024-11-28 12:37:44.815968] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added
00:14:02.409 [2024-11-28 12:37:44.815978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms)
00:14:02.409 [2024-11-28 12:37:44.815986] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms)
00:14:02.409 [2024-11-28 12:37:44.815992] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:14:02.409 [2024-11-28 12:37:44.815996] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:14:02.409 [2024-11-28 12:37:44.815999] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:14:02.409 [2024-11-28 12:37:44.816005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:14:02.409 [2024-11-28 12:37:44.823954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:02.409 [2024-11-28 12:37:44.823967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:02.409 [2024-11-28 12:37:44.823974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:02.409 [2024-11-28 12:37:44.823980] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:02.409 [2024-11-28 12:37:44.823984] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:02.409 [2024-11-28 12:37:44.823987] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:02.409 [2024-11-28 12:37:44.823993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:02.409 [2024-11-28 12:37:44.831952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:02.409 [2024-11-28 12:37:44.831964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:02.409 [2024-11-28 12:37:44.831970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:02.409 [2024-11-28 12:37:44.831977] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:14:02.409 [2024-11-28 12:37:44.831982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:14:02.409 [2024-11-28 12:37:44.831987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:02.409 [2024-11-28 12:37:44.831992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:02.409 [2024-11-28 12:37:44.831996] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:02.409 [2024-11-28 12:37:44.832000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:02.409 [2024-11-28 12:37:44.832005] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:02.409 [2024-11-28 12:37:44.832020] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:02.409 [2024-11-28 12:37:44.839952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:02.409 [2024-11-28 12:37:44.839965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:02.409 [2024-11-28 12:37:44.847952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:02.409 [2024-11-28 12:37:44.847965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:02.409 [2024-11-28 12:37:44.855953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:02.409 [2024-11-28 
12:37:44.855966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:02.409 [2024-11-28 12:37:44.863954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:02.409 [2024-11-28 12:37:44.863969] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:02.409 [2024-11-28 12:37:44.863973] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:02.409 [2024-11-28 12:37:44.863976] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:02.409 [2024-11-28 12:37:44.863979] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:02.409 [2024-11-28 12:37:44.863983] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:02.409 [2024-11-28 12:37:44.863989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:02.409 [2024-11-28 12:37:44.863995] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:02.409 [2024-11-28 12:37:44.863999] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:02.409 [2024-11-28 12:37:44.864002] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:02.409 [2024-11-28 12:37:44.864008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:02.409 [2024-11-28 12:37:44.864014] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:02.409 [2024-11-28 12:37:44.864018] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:02.409 [2024-11-28 12:37:44.864021] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:02.409 [2024-11-28 12:37:44.864026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:02.409 [2024-11-28 12:37:44.864033] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:02.409 [2024-11-28 12:37:44.864036] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:02.409 [2024-11-28 12:37:44.864040] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:02.409 [2024-11-28 12:37:44.864045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:02.409 [2024-11-28 12:37:44.871954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:02.409 [2024-11-28 12:37:44.871968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:02.409 [2024-11-28 12:37:44.871978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:02.409 [2024-11-28 12:37:44.871985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:02.409 ===================================================== 00:14:02.409 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:02.409 ===================================================== 00:14:02.409 Controller Capabilities/Features 00:14:02.409 
================================ 00:14:02.409 Vendor ID: 4e58 00:14:02.409 Subsystem Vendor ID: 4e58 00:14:02.409 Serial Number: SPDK2 00:14:02.409 Model Number: SPDK bdev Controller 00:14:02.409 Firmware Version: 25.01 00:14:02.409 Recommended Arb Burst: 6 00:14:02.409 IEEE OUI Identifier: 8d 6b 50 00:14:02.409 Multi-path I/O 00:14:02.409 May have multiple subsystem ports: Yes 00:14:02.409 May have multiple controllers: Yes 00:14:02.409 Associated with SR-IOV VF: No 00:14:02.409 Max Data Transfer Size: 131072 00:14:02.409 Max Number of Namespaces: 32 00:14:02.409 Max Number of I/O Queues: 127 00:14:02.409 NVMe Specification Version (VS): 1.3 00:14:02.409 NVMe Specification Version (Identify): 1.3 00:14:02.409 Maximum Queue Entries: 256 00:14:02.409 Contiguous Queues Required: Yes 00:14:02.409 Arbitration Mechanisms Supported 00:14:02.409 Weighted Round Robin: Not Supported 00:14:02.409 Vendor Specific: Not Supported 00:14:02.409 Reset Timeout: 15000 ms 00:14:02.409 Doorbell Stride: 4 bytes 00:14:02.409 NVM Subsystem Reset: Not Supported 00:14:02.409 Command Sets Supported 00:14:02.409 NVM Command Set: Supported 00:14:02.409 Boot Partition: Not Supported 00:14:02.409 Memory Page Size Minimum: 4096 bytes 00:14:02.409 Memory Page Size Maximum: 4096 bytes 00:14:02.409 Persistent Memory Region: Not Supported 00:14:02.409 Optional Asynchronous Events Supported 00:14:02.409 Namespace Attribute Notices: Supported 00:14:02.409 Firmware Activation Notices: Not Supported 00:14:02.409 ANA Change Notices: Not Supported 00:14:02.409 PLE Aggregate Log Change Notices: Not Supported 00:14:02.409 LBA Status Info Alert Notices: Not Supported 00:14:02.409 EGE Aggregate Log Change Notices: Not Supported 00:14:02.409 Normal NVM Subsystem Shutdown event: Not Supported 00:14:02.409 Zone Descriptor Change Notices: Not Supported 00:14:02.409 Discovery Log Change Notices: Not Supported 00:14:02.409 Controller Attributes 00:14:02.409 128-bit Host Identifier: Supported 00:14:02.409 
Non-Operational Permissive Mode: Not Supported 00:14:02.409 NVM Sets: Not Supported 00:14:02.409 Read Recovery Levels: Not Supported 00:14:02.409 Endurance Groups: Not Supported 00:14:02.409 Predictable Latency Mode: Not Supported 00:14:02.409 Traffic Based Keep Alive: Not Supported 00:14:02.409 Namespace Granularity: Not Supported 00:14:02.409 SQ Associations: Not Supported 00:14:02.409 UUID List: Not Supported 00:14:02.409 Multi-Domain Subsystem: Not Supported 00:14:02.410 Fixed Capacity Management: Not Supported 00:14:02.410 Variable Capacity Management: Not Supported 00:14:02.410 Delete Endurance Group: Not Supported 00:14:02.410 Delete NVM Set: Not Supported 00:14:02.410 Extended LBA Formats Supported: Not Supported 00:14:02.410 Flexible Data Placement Supported: Not Supported 00:14:02.410 00:14:02.410 Controller Memory Buffer Support 00:14:02.410 ================================ 00:14:02.410 Supported: No 00:14:02.410 00:14:02.410 Persistent Memory Region Support 00:14:02.410 ================================ 00:14:02.410 Supported: No 00:14:02.410 00:14:02.410 Admin Command Set Attributes 00:14:02.410 ============================ 00:14:02.410 Security Send/Receive: Not Supported 00:14:02.410 Format NVM: Not Supported 00:14:02.410 Firmware Activate/Download: Not Supported 00:14:02.410 Namespace Management: Not Supported 00:14:02.410 Device Self-Test: Not Supported 00:14:02.410 Directives: Not Supported 00:14:02.410 NVMe-MI: Not Supported 00:14:02.410 Virtualization Management: Not Supported 00:14:02.410 Doorbell Buffer Config: Not Supported 00:14:02.410 Get LBA Status Capability: Not Supported 00:14:02.410 Command & Feature Lockdown Capability: Not Supported 00:14:02.410 Abort Command Limit: 4 00:14:02.410 Async Event Request Limit: 4 00:14:02.410 Number of Firmware Slots: N/A 00:14:02.410 Firmware Slot 1 Read-Only: N/A 00:14:02.410 Firmware Activation Without Reset: N/A 00:14:02.410 Multiple Update Detection Support: N/A 00:14:02.410 Firmware Update 
Granularity: No Information Provided 00:14:02.410 Per-Namespace SMART Log: No 00:14:02.410 Asymmetric Namespace Access Log Page: Not Supported 00:14:02.410 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:02.410 Command Effects Log Page: Supported 00:14:02.410 Get Log Page Extended Data: Supported 00:14:02.410 Telemetry Log Pages: Not Supported 00:14:02.410 Persistent Event Log Pages: Not Supported 00:14:02.410 Supported Log Pages Log Page: May Support 00:14:02.410 Commands Supported & Effects Log Page: Not Supported 00:14:02.410 Feature Identifiers & Effects Log Page: May Support 00:14:02.410 NVMe-MI Commands & Effects Log Page: May Support 00:14:02.410 Data Area 4 for Telemetry Log: Not Supported 00:14:02.410 Error Log Page Entries Supported: 128 00:14:02.410 Keep Alive: Supported 00:14:02.410 Keep Alive Granularity: 10000 ms 00:14:02.410 00:14:02.410 NVM Command Set Attributes 00:14:02.410 ========================== 00:14:02.410 Submission Queue Entry Size 00:14:02.410 Max: 64 00:14:02.410 Min: 64 00:14:02.410 Completion Queue Entry Size 00:14:02.410 Max: 16 00:14:02.410 Min: 16 00:14:02.410 Number of Namespaces: 32 00:14:02.410 Compare Command: Supported 00:14:02.410 Write Uncorrectable Command: Not Supported 00:14:02.410 Dataset Management Command: Supported 00:14:02.410 Write Zeroes Command: Supported 00:14:02.410 Set Features Save Field: Not Supported 00:14:02.410 Reservations: Not Supported 00:14:02.410 Timestamp: Not Supported 00:14:02.410 Copy: Supported 00:14:02.410 Volatile Write Cache: Present 00:14:02.410 Atomic Write Unit (Normal): 1 00:14:02.410 Atomic Write Unit (PFail): 1 00:14:02.410 Atomic Compare & Write Unit: 1 00:14:02.410 Fused Compare & Write: Supported 00:14:02.410 Scatter-Gather List 00:14:02.410 SGL Command Set: Supported (Dword aligned) 00:14:02.410 SGL Keyed: Not Supported 00:14:02.410 SGL Bit Bucket Descriptor: Not Supported 00:14:02.410 SGL Metadata Pointer: Not Supported 00:14:02.410 Oversized SGL: Not Supported 00:14:02.410 SGL 
Metadata Address: Not Supported 00:14:02.410 SGL Offset: Not Supported 00:14:02.410 Transport SGL Data Block: Not Supported 00:14:02.410 Replay Protected Memory Block: Not Supported 00:14:02.410 00:14:02.410 Firmware Slot Information 00:14:02.410 ========================= 00:14:02.410 Active slot: 1 00:14:02.410 Slot 1 Firmware Revision: 25.01 00:14:02.410 00:14:02.410 00:14:02.410 Commands Supported and Effects 00:14:02.410 ============================== 00:14:02.410 Admin Commands 00:14:02.410 -------------- 00:14:02.410 Get Log Page (02h): Supported 00:14:02.410 Identify (06h): Supported 00:14:02.410 Abort (08h): Supported 00:14:02.410 Set Features (09h): Supported 00:14:02.410 Get Features (0Ah): Supported 00:14:02.410 Asynchronous Event Request (0Ch): Supported 00:14:02.410 Keep Alive (18h): Supported 00:14:02.410 I/O Commands 00:14:02.410 ------------ 00:14:02.410 Flush (00h): Supported LBA-Change 00:14:02.410 Write (01h): Supported LBA-Change 00:14:02.410 Read (02h): Supported 00:14:02.410 Compare (05h): Supported 00:14:02.410 Write Zeroes (08h): Supported LBA-Change 00:14:02.410 Dataset Management (09h): Supported LBA-Change 00:14:02.410 Copy (19h): Supported LBA-Change 00:14:02.410 00:14:02.410 Error Log 00:14:02.410 ========= 00:14:02.410 00:14:02.410 Arbitration 00:14:02.410 =========== 00:14:02.410 Arbitration Burst: 1 00:14:02.410 00:14:02.410 Power Management 00:14:02.410 ================ 00:14:02.410 Number of Power States: 1 00:14:02.410 Current Power State: Power State #0 00:14:02.410 Power State #0: 00:14:02.410 Max Power: 0.00 W 00:14:02.410 Non-Operational State: Operational 00:14:02.410 Entry Latency: Not Reported 00:14:02.410 Exit Latency: Not Reported 00:14:02.410 Relative Read Throughput: 0 00:14:02.410 Relative Read Latency: 0 00:14:02.410 Relative Write Throughput: 0 00:14:02.410 Relative Write Latency: 0 00:14:02.410 Idle Power: Not Reported 00:14:02.410 Active Power: Not Reported 00:14:02.410 Non-Operational Permissive Mode: Not 
Supported 00:14:02.410 00:14:02.410 Health Information 00:14:02.410 ================== 00:14:02.410 Critical Warnings: 00:14:02.410 Available Spare Space: OK 00:14:02.410 Temperature: OK 00:14:02.410 Device Reliability: OK 00:14:02.410 Read Only: No 00:14:02.410 Volatile Memory Backup: OK 00:14:02.410 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:02.410 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:02.410 Available Spare: 0% 00:14:02.410 Available Spare Threshold: 0% 00:14:02.410 [2024-11-28 12:37:44.872077] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:02.410 [2024-11-28 12:37:44.879952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:02.410 [2024-11-28 12:37:44.879982] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:02.410 [2024-11-28 12:37:44.879991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.410 [2024-11-28 12:37:44.879996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.410 [2024-11-28 12:37:44.880002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.410 [2024-11-28 12:37:44.880008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.410 [2024-11-28 12:37:44.880066] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:02.410 [2024-11-28 12:37:44.880077] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:02.410 
[2024-11-28 12:37:44.881066] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:02.410 [2024-11-28 12:37:44.881108] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:02.410 [2024-11-28 12:37:44.881114] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:02.410 [2024-11-28 12:37:44.882073] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:02.410 [2024-11-28 12:37:44.882083] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:02.410 [2024-11-28 12:37:44.882128] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:02.410 [2024-11-28 12:37:44.883105] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:02.410 Life Percentage Used: 0% 00:14:02.410 Data Units Read: 0 00:14:02.410 Data Units Written: 0 00:14:02.410 Host Read Commands: 0 00:14:02.410 Host Write Commands: 0 00:14:02.410 Controller Busy Time: 0 minutes 00:14:02.410 Power Cycles: 0 00:14:02.410 Power On Hours: 0 hours 00:14:02.410 Unsafe Shutdowns: 0 00:14:02.410 Unrecoverable Media Errors: 0 00:14:02.410 Lifetime Error Log Entries: 0 00:14:02.410 Warning Temperature Time: 0 minutes 00:14:02.410 Critical Temperature Time: 0 minutes 00:14:02.410 00:14:02.410 Number of Queues 00:14:02.410 ================ 00:14:02.410 Number of I/O Submission Queues: 127 00:14:02.410 Number of I/O Completion Queues: 127 00:14:02.410 00:14:02.410 Active Namespaces 00:14:02.410 ================= 00:14:02.410 Namespace ID:1 00:14:02.410 Error Recovery Timeout: Unlimited 
00:14:02.410 Command Set Identifier: NVM (00h) 00:14:02.411 Deallocate: Supported 00:14:02.411 Deallocated/Unwritten Error: Not Supported 00:14:02.411 Deallocated Read Value: Unknown 00:14:02.411 Deallocate in Write Zeroes: Not Supported 00:14:02.411 Deallocated Guard Field: 0xFFFF 00:14:02.411 Flush: Supported 00:14:02.411 Reservation: Supported 00:14:02.411 Namespace Sharing Capabilities: Multiple Controllers 00:14:02.411 Size (in LBAs): 131072 (0GiB) 00:14:02.411 Capacity (in LBAs): 131072 (0GiB) 00:14:02.411 Utilization (in LBAs): 131072 (0GiB) 00:14:02.411 NGUID: E27082B448CE476EA8EDB95FE3752DE4 00:14:02.411 UUID: e27082b4-48ce-476e-a8ed-b95fe3752de4 00:14:02.411 Thin Provisioning: Not Supported 00:14:02.411 Per-NS Atomic Units: Yes 00:14:02.411 Atomic Boundary Size (Normal): 0 00:14:02.411 Atomic Boundary Size (PFail): 0 00:14:02.411 Atomic Boundary Offset: 0 00:14:02.411 Maximum Single Source Range Length: 65535 00:14:02.411 Maximum Copy Length: 65535 00:14:02.411 Maximum Source Range Count: 1 00:14:02.411 NGUID/EUI64 Never Reused: No 00:14:02.411 Namespace Write Protected: No 00:14:02.411 Number of LBA Formats: 1 00:14:02.411 Current LBA Format: LBA Format #00 00:14:02.411 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:02.411 00:14:02.411 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:02.669 [2024-11-28 12:37:45.114366] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:07.937 Initializing NVMe Controllers 00:14:07.937 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:07.937 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:14:07.937 Initialization complete. Launching workers. 00:14:07.937 ======================================================== 00:14:07.937 Latency(us) 00:14:07.937 Device Information : IOPS MiB/s Average min max 00:14:07.937 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39941.97 156.02 3204.49 1013.91 6589.41 00:14:07.937 ======================================================== 00:14:07.937 Total : 39941.97 156.02 3204.49 1013.91 6589.41 00:14:07.937 00:14:07.937 [2024-11-28 12:37:50.220213] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:07.937 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:07.937 [2024-11-28 12:37:50.450895] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:13.206 Initializing NVMe Controllers 00:14:13.206 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:13.206 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:13.206 Initialization complete. Launching workers. 
00:14:13.206 ======================================================== 00:14:13.206 Latency(us) 00:14:13.206 Device Information : IOPS MiB/s Average min max 00:14:13.206 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39946.38 156.04 3204.79 1006.61 7195.76 00:14:13.206 ======================================================== 00:14:13.206 Total : 39946.38 156.04 3204.79 1006.61 7195.76 00:14:13.206 00:14:13.206 [2024-11-28 12:37:55.475364] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:13.206 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:13.206 [2024-11-28 12:37:55.678761] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:18.478 [2024-11-28 12:38:00.808254] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:18.478 Initializing NVMe Controllers 00:14:18.478 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:18.478 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:18.478 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:18.478 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:18.478 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:18.478 Initialization complete. Launching workers. 
00:14:18.478 Starting thread on core 2 00:14:18.478 Starting thread on core 3 00:14:18.478 Starting thread on core 1 00:14:18.478 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:18.737 [2024-11-28 12:38:01.104061] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:22.023 [2024-11-28 12:38:04.192196] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:22.023 Initializing NVMe Controllers 00:14:22.023 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:22.023 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:22.023 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:22.023 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:22.023 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:22.023 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:22.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:22.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:22.023 Initialization complete. Launching workers. 
00:14:22.023 Starting thread on core 1 with urgent priority queue 00:14:22.023 Starting thread on core 2 with urgent priority queue 00:14:22.023 Starting thread on core 3 with urgent priority queue 00:14:22.023 Starting thread on core 0 with urgent priority queue 00:14:22.023 SPDK bdev Controller (SPDK2 ) core 0: 9237.00 IO/s 10.83 secs/100000 ios 00:14:22.023 SPDK bdev Controller (SPDK2 ) core 1: 8598.67 IO/s 11.63 secs/100000 ios 00:14:22.023 SPDK bdev Controller (SPDK2 ) core 2: 7363.33 IO/s 13.58 secs/100000 ios 00:14:22.023 SPDK bdev Controller (SPDK2 ) core 3: 10346.33 IO/s 9.67 secs/100000 ios 00:14:22.023 ======================================================== 00:14:22.023 00:14:22.023 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:22.023 [2024-11-28 12:38:04.481354] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:22.023 Initializing NVMe Controllers 00:14:22.023 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:22.023 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:22.023 Namespace ID: 1 size: 0GB 00:14:22.023 Initialization complete. 00:14:22.023 INFO: using host memory buffer for IO 00:14:22.023 Hello world! 
00:14:22.023 [2024-11-28 12:38:04.491413] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:22.023 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:22.281 [2024-11-28 12:38:04.785883] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:23.655 Initializing NVMe Controllers 00:14:23.655 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:23.655 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:23.655 Initialization complete. Launching workers. 00:14:23.655 submit (in ns) avg, min, max = 6679.5, 3272.2, 4175260.0 00:14:23.655 complete (in ns) avg, min, max = 20382.3, 1821.7, 3999530.4 00:14:23.655 00:14:23.655 Submit histogram 00:14:23.655 ================ 00:14:23.655 Range in us Cumulative Count 00:14:23.655 3.270 - 3.283: 0.1422% ( 23) 00:14:23.655 3.283 - 3.297: 1.5950% ( 235) 00:14:23.655 3.297 - 3.311: 5.4154% ( 618) 00:14:23.655 3.311 - 3.325: 10.2807% ( 787) 00:14:23.655 3.325 - 3.339: 15.4303% ( 833) 00:14:23.655 3.339 - 3.353: 21.5381% ( 988) 00:14:23.655 3.353 - 3.367: 27.2132% ( 918) 00:14:23.655 3.367 - 3.381: 32.2700% ( 818) 00:14:23.655 3.381 - 3.395: 37.8895% ( 909) 00:14:23.655 3.395 - 3.409: 42.6929% ( 777) 00:14:23.655 3.409 - 3.423: 46.8163% ( 667) 00:14:23.655 3.423 - 3.437: 50.9768% ( 673) 00:14:23.655 3.437 - 3.450: 57.3751% ( 1035) 00:14:23.655 3.450 - 3.464: 64.0702% ( 1083) 00:14:23.655 3.464 - 3.478: 68.1813% ( 665) 00:14:23.655 3.478 - 3.492: 73.1330% ( 801) 00:14:23.655 3.492 - 3.506: 78.2826% ( 833) 00:14:23.655 3.506 - 3.520: 81.7013% ( 553) 00:14:23.655 3.520 - 3.534: 84.1926% ( 403) 00:14:23.655 3.534 - 3.548: 85.6516% ( 236) 00:14:23.655 3.548 - 3.562: 86.6531% ( 
162) 00:14:23.655 3.562 - 3.590: 87.6669% ( 164) 00:14:23.655 3.590 - 3.617: 89.0640% ( 226) 00:14:23.655 3.617 - 3.645: 90.8074% ( 282) 00:14:23.655 3.645 - 3.673: 92.4023% ( 258) 00:14:23.655 3.673 - 3.701: 94.0776% ( 271) 00:14:23.655 3.701 - 3.729: 95.9075% ( 296) 00:14:23.655 3.729 - 3.757: 97.3170% ( 228) 00:14:23.655 3.757 - 3.784: 98.0774% ( 123) 00:14:23.655 3.784 - 3.812: 98.6832% ( 98) 00:14:23.655 3.812 - 3.840: 99.1345% ( 73) 00:14:23.655 3.840 - 3.868: 99.3447% ( 34) 00:14:23.655 3.868 - 3.896: 99.4807% ( 22) 00:14:23.655 3.896 - 3.923: 99.5302% ( 8) 00:14:23.655 3.951 - 3.979: 99.5549% ( 4) 00:14:23.655 3.979 - 4.007: 99.5673% ( 2) 00:14:23.655 4.007 - 4.035: 99.5734% ( 1) 00:14:23.655 4.035 - 4.063: 99.5858% ( 2) 00:14:23.655 4.090 - 4.118: 99.5920% ( 1) 00:14:23.655 4.118 - 4.146: 99.5982% ( 1) 00:14:23.655 4.146 - 4.174: 99.6044% ( 1) 00:14:23.655 4.257 - 4.285: 99.6105% ( 1) 00:14:23.655 5.092 - 5.120: 99.6167% ( 1) 00:14:23.655 5.120 - 5.148: 99.6229% ( 1) 00:14:23.655 5.259 - 5.287: 99.6291% ( 1) 00:14:23.655 5.454 - 5.482: 99.6353% ( 1) 00:14:23.655 5.482 - 5.510: 99.6414% ( 1) 00:14:23.655 5.593 - 5.621: 99.6476% ( 1) 00:14:23.655 5.621 - 5.649: 99.6538% ( 1) 00:14:23.655 5.677 - 5.704: 99.6600% ( 1) 00:14:23.655 5.816 - 5.843: 99.6662% ( 1) 00:14:23.655 5.843 - 5.871: 99.6785% ( 2) 00:14:23.655 5.871 - 5.899: 99.6847% ( 1) 00:14:23.656 5.983 - 6.010: 99.6909% ( 1) 00:14:23.656 6.066 - 6.094: 99.7094% ( 3) 00:14:23.656 6.177 - 6.205: 99.7156% ( 1) 00:14:23.656 6.233 - 6.261: 99.7218% ( 1) 00:14:23.656 6.289 - 6.317: 99.7280% ( 1) 00:14:23.656 6.317 - 6.344: 99.7404% ( 2) 00:14:23.656 6.344 - 6.372: 99.7465% ( 1) 00:14:23.656 6.428 - 6.456: 99.7527% ( 1) 00:14:23.656 6.456 - 6.483: 99.7651% ( 2) 00:14:23.656 6.511 - 6.539: 99.7774% ( 2) 00:14:23.656 6.567 - 6.595: 99.7960% ( 3) 00:14:23.656 6.678 - 6.706: 99.8084% ( 2) 00:14:23.656 6.706 - 6.734: 99.8145% ( 1) 00:14:23.656 6.790 - 6.817: 99.8331% ( 3) 00:14:23.656 6.901 - 6.929: 99.8393% ( 1) 
00:14:23.656 6.929 - 6.957: 99.8455% ( 1) 00:14:23.656 6.957 - 6.984: 99.8516% ( 1) 00:14:23.656 7.040 - 7.068: 99.8578% ( 1) 00:14:23.656 7.179 - 7.235: 99.8640% ( 1) 00:14:23.656 7.290 - 7.346: 99.8702% ( 1) 00:14:23.656 7.457 - 7.513: 99.8764% ( 1) 00:14:23.656 7.513 - 7.569: 99.8825% ( 1) 00:14:23.656 7.569 - 7.624: 99.8887% ( 1) 00:14:23.656 7.624 - 7.680: 99.8949% ( 1) 00:14:23.656 7.847 - 7.903: 99.9011% ( 1) 00:14:23.656 8.014 - 8.070: 99.9073% ( 1) 00:14:23.656 [2024-11-28 12:38:05.878023] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:23.656 8.125 - 8.181: 99.9135% ( 1) 00:14:23.656 11.965 - 12.021: 99.9196% ( 1) 00:14:23.656 3989.148 - 4017.642: 99.9938% ( 12) 00:14:23.656 4160.111 - 4188.605: 100.0000% ( 1) 00:14:23.656 00:14:23.656 Complete histogram 00:14:23.656 ================== 00:14:23.656 Range in us Cumulative Count 00:14:23.656 1.809 - 1.823: 0.0062% ( 1) 00:14:23.656 1.823 - 1.837: 4.7107% ( 761) 00:14:23.656 1.837 - 1.850: 58.5312% ( 8706) 00:14:23.656 1.850 - 1.864: 80.8420% ( 3609) 00:14:23.656 1.864 - 1.878: 86.8942% ( 979) 00:14:23.656 1.878 - 1.892: 91.3452% ( 720) 00:14:23.656 1.892 - 1.906: 95.6973% ( 704) 00:14:23.656 1.906 - 1.920: 97.5519% ( 300) 00:14:23.656 1.920 - 1.934: 98.5658% ( 164) 00:14:23.656 1.934 - 1.948: 98.9552% ( 63) 00:14:23.656 1.948 - 1.962: 99.0171% ( 10) 00:14:23.656 1.962 - 1.976: 99.0851% ( 11) 00:14:23.656 1.976 - 1.990: 99.1036% ( 3) 00:14:23.656 1.990 - 2.003: 99.1160% ( 2) 00:14:23.656 2.003 - 2.017: 99.1283% ( 2) 00:14:23.656 2.017 - 2.031: 99.1345% ( 1) 00:14:23.656 2.031 - 2.045: 99.1407% ( 1) 00:14:23.656 2.045 - 2.059: 99.1531% ( 2) 00:14:23.656 2.059 - 2.073: 99.1592% ( 1) 00:14:23.656 2.087 - 2.101: 99.1778% ( 3) 00:14:23.656 2.101 - 2.115: 99.1902% ( 2) 00:14:23.656 2.157 - 2.170: 99.2025% ( 2) 00:14:23.656 2.170 - 2.184: 99.2087% ( 1) 00:14:23.656 2.184 - 2.198: 99.2211% ( 2) 00:14:23.656 2.198 - 2.212: 99.2273% ( 1) 00:14:23.656 2.212 - 
2.226: 99.2520% ( 4) 00:14:23.656 2.226 - 2.240: 99.2643% ( 2) 00:14:23.656 2.282 - 2.296: 99.2705% ( 1) 00:14:23.656 2.323 - 2.337: 99.2767% ( 1) 00:14:23.656 2.393 - 2.407: 99.2829% ( 1) 00:14:23.656 2.449 - 2.463: 99.2953% ( 2) 00:14:23.656 3.617 - 3.645: 99.3014% ( 1) 00:14:23.656 3.645 - 3.673: 99.3076% ( 1) 00:14:23.656 3.673 - 3.701: 99.3138% ( 1) 00:14:23.656 3.896 - 3.923: 99.3200% ( 1) 00:14:23.656 3.979 - 4.007: 99.3262% ( 1) 00:14:23.656 4.007 - 4.035: 99.3323% ( 1) 00:14:23.656 4.118 - 4.146: 99.3447% ( 2) 00:14:23.656 4.174 - 4.202: 99.3509% ( 1) 00:14:23.656 4.230 - 4.257: 99.3571% ( 1) 00:14:23.656 4.313 - 4.341: 99.3633% ( 1) 00:14:23.656 4.341 - 4.369: 99.3694% ( 1) 00:14:23.656 4.452 - 4.480: 99.3756% ( 1) 00:14:23.656 4.480 - 4.508: 99.3818% ( 1) 00:14:23.656 4.619 - 4.647: 99.3880% ( 1) 00:14:23.656 4.647 - 4.675: 99.3942% ( 1) 00:14:23.656 4.675 - 4.703: 99.4003% ( 1) 00:14:23.656 4.730 - 4.758: 99.4065% ( 1) 00:14:23.656 4.814 - 4.842: 99.4127% ( 1) 00:14:23.656 4.897 - 4.925: 99.4251% ( 2) 00:14:23.656 4.981 - 5.009: 99.4313% ( 1) 00:14:23.656 5.120 - 5.148: 99.4374% ( 1) 00:14:23.656 5.148 - 5.176: 99.4436% ( 1) 00:14:23.656 5.176 - 5.203: 99.4498% ( 1) 00:14:23.656 5.287 - 5.315: 99.4560% ( 1) 00:14:23.656 5.398 - 5.426: 99.4622% ( 1) 00:14:23.656 5.482 - 5.510: 99.4683% ( 1) 00:14:23.656 5.816 - 5.843: 99.4807% ( 2) 00:14:23.656 5.871 - 5.899: 99.4869% ( 1) 00:14:23.656 6.122 - 6.150: 99.4931% ( 1) 00:14:23.656 6.150 - 6.177: 99.5054% ( 2) 00:14:23.656 6.177 - 6.205: 99.5116% ( 1) 00:14:23.656 6.511 - 6.539: 99.5178% ( 1) 00:14:23.656 6.706 - 6.734: 99.5240% ( 1) 00:14:23.656 7.040 - 7.068: 99.5302% ( 1) 00:14:23.656 9.517 - 9.572: 99.5364% ( 1) 00:14:23.656 3989.148 - 4017.642: 100.0000% ( 75) 00:14:23.656 00:14:23.656 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:23.656 12:38:05 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:23.656 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:23.656 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:23.656 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:23.656 [ 00:14:23.656 { 00:14:23.656 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:23.656 "subtype": "Discovery", 00:14:23.656 "listen_addresses": [], 00:14:23.656 "allow_any_host": true, 00:14:23.656 "hosts": [] 00:14:23.656 }, 00:14:23.656 { 00:14:23.656 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:23.656 "subtype": "NVMe", 00:14:23.656 "listen_addresses": [ 00:14:23.656 { 00:14:23.656 "trtype": "VFIOUSER", 00:14:23.656 "adrfam": "IPv4", 00:14:23.656 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:23.656 "trsvcid": "0" 00:14:23.656 } 00:14:23.656 ], 00:14:23.656 "allow_any_host": true, 00:14:23.656 "hosts": [], 00:14:23.656 "serial_number": "SPDK1", 00:14:23.656 "model_number": "SPDK bdev Controller", 00:14:23.656 "max_namespaces": 32, 00:14:23.656 "min_cntlid": 1, 00:14:23.656 "max_cntlid": 65519, 00:14:23.656 "namespaces": [ 00:14:23.656 { 00:14:23.656 "nsid": 1, 00:14:23.656 "bdev_name": "Malloc1", 00:14:23.656 "name": "Malloc1", 00:14:23.657 "nguid": "B7C442B9D2A745008C0CDE54A0A792D7", 00:14:23.657 "uuid": "b7c442b9-d2a7-4500-8c0c-de54a0a792d7" 00:14:23.657 }, 00:14:23.657 { 00:14:23.657 "nsid": 2, 00:14:23.657 "bdev_name": "Malloc3", 00:14:23.657 "name": "Malloc3", 00:14:23.657 "nguid": "B35EF8D76E984DE69DA214BEC9E47B03", 00:14:23.657 "uuid": "b35ef8d7-6e98-4de6-9da2-14bec9e47b03" 00:14:23.657 } 00:14:23.657 ] 00:14:23.657 }, 00:14:23.657 { 00:14:23.657 "nqn": 
"nqn.2019-07.io.spdk:cnode2", 00:14:23.657 "subtype": "NVMe", 00:14:23.657 "listen_addresses": [ 00:14:23.657 { 00:14:23.657 "trtype": "VFIOUSER", 00:14:23.657 "adrfam": "IPv4", 00:14:23.657 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:23.657 "trsvcid": "0" 00:14:23.657 } 00:14:23.657 ], 00:14:23.657 "allow_any_host": true, 00:14:23.657 "hosts": [], 00:14:23.657 "serial_number": "SPDK2", 00:14:23.657 "model_number": "SPDK bdev Controller", 00:14:23.657 "max_namespaces": 32, 00:14:23.657 "min_cntlid": 1, 00:14:23.657 "max_cntlid": 65519, 00:14:23.657 "namespaces": [ 00:14:23.657 { 00:14:23.657 "nsid": 1, 00:14:23.657 "bdev_name": "Malloc2", 00:14:23.657 "name": "Malloc2", 00:14:23.657 "nguid": "E27082B448CE476EA8EDB95FE3752DE4", 00:14:23.657 "uuid": "e27082b4-48ce-476e-a8ed-b95fe3752de4" 00:14:23.657 } 00:14:23.657 ] 00:14:23.657 } 00:14:23.657 ] 00:14:23.657 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:23.657 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:23.657 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2493653 00:14:23.657 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:23.657 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:23.657 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:23.657 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:23.657 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:23.657 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:23.657 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:23.916 [2024-11-28 12:38:06.263571] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:23.916 Malloc4 00:14:23.916 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:24.175 [2024-11-28 12:38:06.529661] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:24.175 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:24.175 Asynchronous Event Request test 00:14:24.175 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:24.175 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:24.175 Registering asynchronous event callbacks... 00:14:24.175 Starting namespace attribute notice tests for all controllers... 00:14:24.175 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:24.175 aer_cb - Changed Namespace 00:14:24.175 Cleaning up... 
00:14:24.434 [ 00:14:24.434 { 00:14:24.434 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:24.434 "subtype": "Discovery", 00:14:24.434 "listen_addresses": [], 00:14:24.434 "allow_any_host": true, 00:14:24.434 "hosts": [] 00:14:24.434 }, 00:14:24.434 { 00:14:24.434 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:24.434 "subtype": "NVMe", 00:14:24.434 "listen_addresses": [ 00:14:24.434 { 00:14:24.434 "trtype": "VFIOUSER", 00:14:24.434 "adrfam": "IPv4", 00:14:24.434 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:24.434 "trsvcid": "0" 00:14:24.434 } 00:14:24.434 ], 00:14:24.434 "allow_any_host": true, 00:14:24.434 "hosts": [], 00:14:24.434 "serial_number": "SPDK1", 00:14:24.434 "model_number": "SPDK bdev Controller", 00:14:24.434 "max_namespaces": 32, 00:14:24.434 "min_cntlid": 1, 00:14:24.434 "max_cntlid": 65519, 00:14:24.434 "namespaces": [ 00:14:24.434 { 00:14:24.434 "nsid": 1, 00:14:24.434 "bdev_name": "Malloc1", 00:14:24.434 "name": "Malloc1", 00:14:24.434 "nguid": "B7C442B9D2A745008C0CDE54A0A792D7", 00:14:24.434 "uuid": "b7c442b9-d2a7-4500-8c0c-de54a0a792d7" 00:14:24.434 }, 00:14:24.434 { 00:14:24.434 "nsid": 2, 00:14:24.434 "bdev_name": "Malloc3", 00:14:24.434 "name": "Malloc3", 00:14:24.434 "nguid": "B35EF8D76E984DE69DA214BEC9E47B03", 00:14:24.434 "uuid": "b35ef8d7-6e98-4de6-9da2-14bec9e47b03" 00:14:24.434 } 00:14:24.434 ] 00:14:24.434 }, 00:14:24.434 { 00:14:24.434 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:24.434 "subtype": "NVMe", 00:14:24.434 "listen_addresses": [ 00:14:24.434 { 00:14:24.434 "trtype": "VFIOUSER", 00:14:24.434 "adrfam": "IPv4", 00:14:24.434 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:24.434 "trsvcid": "0" 00:14:24.434 } 00:14:24.434 ], 00:14:24.434 "allow_any_host": true, 00:14:24.434 "hosts": [], 00:14:24.434 "serial_number": "SPDK2", 00:14:24.434 "model_number": "SPDK bdev Controller", 00:14:24.434 "max_namespaces": 32, 00:14:24.434 "min_cntlid": 1, 00:14:24.434 "max_cntlid": 65519, 00:14:24.434 "namespaces": [ 
00:14:24.434 { 00:14:24.434 "nsid": 1, 00:14:24.434 "bdev_name": "Malloc2", 00:14:24.434 "name": "Malloc2", 00:14:24.434 "nguid": "E27082B448CE476EA8EDB95FE3752DE4", 00:14:24.434 "uuid": "e27082b4-48ce-476e-a8ed-b95fe3752de4" 00:14:24.434 }, 00:14:24.434 { 00:14:24.434 "nsid": 2, 00:14:24.434 "bdev_name": "Malloc4", 00:14:24.434 "name": "Malloc4", 00:14:24.434 "nguid": "562337B5696F4360973C06D8A2F26329", 00:14:24.434 "uuid": "562337b5-696f-4360-973c-06d8a2f26329" 00:14:24.434 } 00:14:24.434 ] 00:14:24.434 } 00:14:24.434 ] 00:14:24.434 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2493653 00:14:24.434 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:24.434 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2486032 00:14:24.434 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2486032 ']' 00:14:24.434 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2486032 00:14:24.434 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:24.434 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:24.434 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2486032 00:14:24.434 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:24.434 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:24.434 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2486032' 00:14:24.434 killing process with pid 2486032 00:14:24.434 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 2486032 00:14:24.434 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2486032 00:14:24.694 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:24.694 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:24.694 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:24.694 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:24.694 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:24.694 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2493885 00:14:24.694 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2493885' 00:14:24.694 Process pid: 2493885 00:14:24.694 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:24.694 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:24.694 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2493885 00:14:24.694 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2493885 ']' 00:14:24.694 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.694 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:24.694 
12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.694 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:24.694 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:24.694 [2024-11-28 12:38:07.093482] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:24.694 [2024-11-28 12:38:07.094348] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:14:24.694 [2024-11-28 12:38:07.094388] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.694 [2024-11-28 12:38:07.157003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:24.694 [2024-11-28 12:38:07.194726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.695 [2024-11-28 12:38:07.194769] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:24.695 [2024-11-28 12:38:07.194777] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:24.695 [2024-11-28 12:38:07.194784] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:24.695 [2024-11-28 12:38:07.194789] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:24.695 [2024-11-28 12:38:07.196213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.695 [2024-11-28 12:38:07.196312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.695 [2024-11-28 12:38:07.196375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:24.695 [2024-11-28 12:38:07.196376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.954 [2024-11-28 12:38:07.265380] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:24.954 [2024-11-28 12:38:07.265494] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:24.954 [2024-11-28 12:38:07.265611] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:14:24.954 [2024-11-28 12:38:07.265810] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:24.954 [2024-11-28 12:38:07.265999] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:14:24.954 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:24.954 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:24.954 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:25.892 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:26.151 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:26.151 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:26.151 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:26.151 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:26.151 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:26.409 Malloc1 00:14:26.409 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:26.409 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:26.666 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:14:26.923 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:26.923 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:26.923 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:27.180 Malloc2 00:14:27.180 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:27.438 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:27.438 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:27.695 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:27.695 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2493885 00:14:27.695 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2493885 ']' 00:14:27.695 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2493885 00:14:27.695 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:27.695 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.695 12:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2493885 00:14:27.695 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.695 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.695 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2493885' 00:14:27.695 killing process with pid 2493885 00:14:27.695 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2493885 00:14:27.695 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2493885 00:14:27.953 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:27.953 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:27.953 00:14:27.953 real 0m51.007s 00:14:27.953 user 3m17.594s 00:14:27.953 sys 0m3.197s 00:14:27.953 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.953 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:27.953 ************************************ 00:14:27.953 END TEST nvmf_vfio_user 00:14:27.953 ************************************ 00:14:27.953 12:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:27.953 12:38:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:27.953 12:38:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.953 12:38:10 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:27.953 ************************************ 00:14:27.953 START TEST nvmf_vfio_user_nvme_compliance 00:14:27.953 ************************************ 00:14:27.953 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:28.212 * Looking for test storage... 00:14:28.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:28.212 12:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.212 12:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:28.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.212 --rc genhtml_branch_coverage=1 00:14:28.212 --rc genhtml_function_coverage=1 00:14:28.212 --rc genhtml_legend=1 00:14:28.212 --rc geninfo_all_blocks=1 00:14:28.212 --rc geninfo_unexecuted_blocks=1 00:14:28.212 00:14:28.212 ' 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:28.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.212 --rc genhtml_branch_coverage=1 00:14:28.212 --rc genhtml_function_coverage=1 00:14:28.212 --rc genhtml_legend=1 00:14:28.212 --rc geninfo_all_blocks=1 00:14:28.212 --rc geninfo_unexecuted_blocks=1 00:14:28.212 00:14:28.212 ' 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:28.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.212 --rc genhtml_branch_coverage=1 00:14:28.212 --rc genhtml_function_coverage=1 00:14:28.212 --rc 
genhtml_legend=1 00:14:28.212 --rc geninfo_all_blocks=1 00:14:28.212 --rc geninfo_unexecuted_blocks=1 00:14:28.212 00:14:28.212 ' 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:28.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.212 --rc genhtml_branch_coverage=1 00:14:28.212 --rc genhtml_function_coverage=1 00:14:28.212 --rc genhtml_legend=1 00:14:28.212 --rc geninfo_all_blocks=1 00:14:28.212 --rc geninfo_unexecuted_blocks=1 00:14:28.212 00:14:28.212 ' 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.212 12:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.212 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:28.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:28.213 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:28.213 12:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:28.213 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:28.213 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:28.213 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:28.213 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:28.213 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:28.213 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:28.213 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2494653 00:14:28.213 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2494653' 00:14:28.213 Process pid: 2494653 00:14:28.213 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:28.213 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:28.213 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2494653 00:14:28.213 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2494653 ']' 00:14:28.213 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.213 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.213 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.213 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.213 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:28.213 [2024-11-28 12:38:10.685687] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:14:28.213 [2024-11-28 12:38:10.685739] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.470 [2024-11-28 12:38:10.750219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:28.470 [2024-11-28 12:38:10.792493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.470 [2024-11-28 12:38:10.792532] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.470 [2024-11-28 12:38:10.792539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:28.470 [2024-11-28 12:38:10.792548] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:28.470 [2024-11-28 12:38:10.792553] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:28.470 [2024-11-28 12:38:10.793919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.470 [2024-11-28 12:38:10.794019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.470 [2024-11-28 12:38:10.794021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.470 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:28.470 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:28.470 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:29.403 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:29.403 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:29.403 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:29.403 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.403 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:29.403 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.403 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:29.403 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:29.403 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.403 12:38:11 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:29.661 malloc0 00:14:29.661 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.661 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:29.661 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.661 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:29.661 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.661 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:29.661 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.661 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:29.661 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.661 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:29.661 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.661 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:29.661 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:29.661 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:29.661 00:14:29.661 00:14:29.661 CUnit - A unit testing framework for C - Version 2.1-3 00:14:29.661 http://cunit.sourceforge.net/ 00:14:29.661 00:14:29.661 00:14:29.661 Suite: nvme_compliance 00:14:29.661 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-28 12:38:12.130386] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:29.661 [2024-11-28 12:38:12.131738] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:29.661 [2024-11-28 12:38:12.131753] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:29.661 [2024-11-28 12:38:12.131760] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:29.661 [2024-11-28 12:38:12.135416] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:29.661 passed 00:14:29.919 Test: admin_identify_ctrlr_verify_fused ...[2024-11-28 12:38:12.214994] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:29.919 [2024-11-28 12:38:12.217999] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:29.919 passed 00:14:29.919 Test: admin_identify_ns ...[2024-11-28 12:38:12.295439] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:29.919 [2024-11-28 12:38:12.358962] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:29.919 [2024-11-28 12:38:12.366958] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:29.919 [2024-11-28 12:38:12.388053] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:14:29.919 passed 00:14:30.178 Test: admin_get_features_mandatory_features ...[2024-11-28 12:38:12.465249] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.178 [2024-11-28 12:38:12.468270] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.178 passed 00:14:30.178 Test: admin_get_features_optional_features ...[2024-11-28 12:38:12.546749] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.178 [2024-11-28 12:38:12.549763] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.178 passed 00:14:30.178 Test: admin_set_features_number_of_queues ...[2024-11-28 12:38:12.628408] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.436 [2024-11-28 12:38:12.741050] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.436 passed 00:14:30.436 Test: admin_get_log_page_mandatory_logs ...[2024-11-28 12:38:12.816110] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.436 [2024-11-28 12:38:12.819136] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.436 passed 00:14:30.436 Test: admin_get_log_page_with_lpo ...[2024-11-28 12:38:12.897054] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.695 [2024-11-28 12:38:12.968960] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:30.695 [2024-11-28 12:38:12.982005] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.695 passed 00:14:30.695 Test: fabric_property_get ...[2024-11-28 12:38:13.056082] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.695 [2024-11-28 12:38:13.057327] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:30.695 [2024-11-28 12:38:13.060111] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.695 passed 00:14:30.695 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-28 12:38:13.138640] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.695 [2024-11-28 12:38:13.139878] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:30.695 [2024-11-28 12:38:13.141666] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.695 passed 00:14:30.953 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-28 12:38:13.218409] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.953 [2024-11-28 12:38:13.295956] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:30.953 [2024-11-28 12:38:13.311954] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:30.953 [2024-11-28 12:38:13.317054] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.953 passed 00:14:30.953 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-28 12:38:13.392254] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:30.953 [2024-11-28 12:38:13.393493] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:30.953 [2024-11-28 12:38:13.395275] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:30.953 passed 00:14:31.211 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-28 12:38:13.474276] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:31.211 [2024-11-28 12:38:13.549961] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:31.211 [2024-11-28 
12:38:13.573957] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:31.211 [2024-11-28 12:38:13.579037] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:31.211 passed 00:14:31.211 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-28 12:38:13.654225] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:31.211 [2024-11-28 12:38:13.655463] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:31.211 [2024-11-28 12:38:13.655487] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:31.211 [2024-11-28 12:38:13.657243] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:31.211 passed 00:14:31.470 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-28 12:38:13.735149] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:31.470 [2024-11-28 12:38:13.827957] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:31.470 [2024-11-28 12:38:13.835957] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:31.470 [2024-11-28 12:38:13.843953] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:31.470 [2024-11-28 12:38:13.851954] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:31.470 [2024-11-28 12:38:13.881048] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:31.470 passed 00:14:31.470 Test: admin_create_io_sq_verify_pc ...[2024-11-28 12:38:13.957153] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:31.470 [2024-11-28 12:38:13.973962] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:31.729 [2024-11-28 12:38:13.991319] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:31.729 passed 00:14:31.729 Test: admin_create_io_qp_max_qps ...[2024-11-28 12:38:14.069852] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:32.664 [2024-11-28 12:38:15.173958] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:33.230 [2024-11-28 12:38:15.563973] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.230 passed 00:14:33.230 Test: admin_create_io_sq_shared_cq ...[2024-11-28 12:38:15.641047] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.488 [2024-11-28 12:38:15.774957] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:33.488 [2024-11-28 12:38:15.812015] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.488 passed 00:14:33.488 00:14:33.488 Run Summary: Type Total Ran Passed Failed Inactive 00:14:33.488 suites 1 1 n/a 0 0 00:14:33.488 tests 18 18 18 0 0 00:14:33.488 asserts 360 360 360 0 n/a 00:14:33.488 00:14:33.488 Elapsed time = 1.516 seconds 00:14:33.488 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2494653 00:14:33.488 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2494653 ']' 00:14:33.488 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2494653 00:14:33.488 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:33.488 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:33.488 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2494653 00:14:33.488 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:33.488 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:33.488 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2494653' 00:14:33.488 killing process with pid 2494653 00:14:33.488 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2494653 00:14:33.488 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2494653 00:14:33.747 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:33.747 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:33.747 00:14:33.747 real 0m5.657s 00:14:33.747 user 0m15.881s 00:14:33.747 sys 0m0.480s 00:14:33.747 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.747 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:33.747 ************************************ 00:14:33.747 END TEST nvmf_vfio_user_nvme_compliance 00:14:33.747 ************************************ 00:14:33.747 12:38:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:33.747 12:38:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:33.747 12:38:16 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:33.747 12:38:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:33.747 ************************************ 00:14:33.747 START TEST nvmf_vfio_user_fuzz 00:14:33.747 ************************************ 00:14:33.747 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:33.747 * Looking for test storage... 00:14:33.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.747 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:33.747 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:14:33.747 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:34.006 12:38:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:34.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.006 --rc genhtml_branch_coverage=1 00:14:34.006 --rc genhtml_function_coverage=1 00:14:34.006 --rc genhtml_legend=1 00:14:34.006 --rc geninfo_all_blocks=1 00:14:34.006 --rc geninfo_unexecuted_blocks=1 00:14:34.006 00:14:34.006 ' 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:34.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.006 --rc genhtml_branch_coverage=1 00:14:34.006 --rc genhtml_function_coverage=1 00:14:34.006 --rc genhtml_legend=1 00:14:34.006 --rc geninfo_all_blocks=1 00:14:34.006 --rc geninfo_unexecuted_blocks=1 00:14:34.006 00:14:34.006 ' 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:34.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.006 --rc genhtml_branch_coverage=1 00:14:34.006 --rc genhtml_function_coverage=1 00:14:34.006 --rc genhtml_legend=1 00:14:34.006 --rc geninfo_all_blocks=1 00:14:34.006 --rc geninfo_unexecuted_blocks=1 00:14:34.006 00:14:34.006 ' 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:34.006 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:34.006 --rc genhtml_branch_coverage=1 00:14:34.006 --rc genhtml_function_coverage=1 00:14:34.006 --rc genhtml_legend=1 00:14:34.006 --rc geninfo_all_blocks=1 00:14:34.006 --rc geninfo_unexecuted_blocks=1 00:14:34.006 00:14:34.006 ' 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.006 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.007 12:38:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:34.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2495632 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2495632' 00:14:34.007 Process pid: 2495632 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2495632 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2495632 ']' 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:34.007 12:38:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:34.007 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:34.265 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.265 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:34.265 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:35.200 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:35.200 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.200 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:35.200 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.200 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:35.200 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:35.200 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.200 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:35.200 malloc0 00:14:35.200 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.200 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:35.200 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.200 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:35.200 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.200 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:35.200 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.200 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:35.200 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.200 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:35.200 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.200 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:35.200 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.200 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:35.200 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:07.268 Fuzzing completed. Shutting down the fuzz application 00:15:07.268 00:15:07.268 Dumping successful admin opcodes: 00:15:07.268 9, 10, 00:15:07.269 Dumping successful io opcodes: 00:15:07.269 0, 00:15:07.269 NS: 0x20000081ef00 I/O qp, Total commands completed: 1017616, total successful commands: 3998, random_seed: 380110400 00:15:07.269 NS: 0x20000081ef00 admin qp, Total commands completed: 251152, total successful commands: 59, random_seed: 2852149056 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2495632 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2495632 ']' 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2495632 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2495632 00:15:07.269 12:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2495632' 00:15:07.269 killing process with pid 2495632 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2495632 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2495632 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:07.269 00:15:07.269 real 0m32.189s 00:15:07.269 user 0m29.417s 00:15:07.269 sys 0m31.993s 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:07.269 ************************************ 00:15:07.269 END TEST nvmf_vfio_user_fuzz 00:15:07.269 ************************************ 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:07.269 ************************************ 00:15:07.269 START TEST nvmf_auth_target 00:15:07.269 ************************************ 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:07.269 * Looking for test storage... 00:15:07.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:07.269 12:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:07.269 12:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:07.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.269 --rc genhtml_branch_coverage=1 00:15:07.269 --rc genhtml_function_coverage=1 00:15:07.269 --rc genhtml_legend=1 00:15:07.269 --rc geninfo_all_blocks=1 00:15:07.269 --rc geninfo_unexecuted_blocks=1 00:15:07.269 00:15:07.269 ' 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:07.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.269 --rc genhtml_branch_coverage=1 00:15:07.269 --rc genhtml_function_coverage=1 00:15:07.269 --rc genhtml_legend=1 00:15:07.269 --rc geninfo_all_blocks=1 00:15:07.269 --rc geninfo_unexecuted_blocks=1 00:15:07.269 00:15:07.269 ' 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:07.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.269 --rc genhtml_branch_coverage=1 00:15:07.269 --rc genhtml_function_coverage=1 00:15:07.269 --rc genhtml_legend=1 00:15:07.269 --rc geninfo_all_blocks=1 00:15:07.269 --rc geninfo_unexecuted_blocks=1 00:15:07.269 00:15:07.269 ' 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:07.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.269 --rc genhtml_branch_coverage=1 00:15:07.269 --rc genhtml_function_coverage=1 00:15:07.269 --rc genhtml_legend=1 00:15:07.269 
--rc geninfo_all_blocks=1 00:15:07.269 --rc geninfo_unexecuted_blocks=1 00:15:07.269 00:15:07.269 ' 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.269 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.270 
12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:07.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:07.270 12:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:07.270 12:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:07.270 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:12.543 12:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:12.543 12:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:12.543 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:12.543 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.543 
12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:12.543 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:12.544 Found net devices under 0000:86:00.0: cvl_0_0 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:12.544 
12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:12.544 Found net devices under 0000:86:00.1: cvl_0_1 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:12.544 12:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:12.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:12.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:15:12.544 00:15:12.544 --- 10.0.0.2 ping statistics --- 00:15:12.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.544 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:12.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:12.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:15:12.544 00:15:12.544 --- 10.0.0.1 ping statistics --- 00:15:12.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.544 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2503933 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2503933 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2503933 ']' 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
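The `nvmfappstart` step above launches `nvmf_tgt` through the `NVMF_APP` array that `nvmf/common.sh` builds up incrementally: first `NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)`, then prefixing `NVMF_TARGET_NS_CMD` so the target runs inside the test namespace. A minimal standalone sketch of that array-composition pattern (paths and values here are illustrative stand-ins, not the in-tree defaults):

```shell
#!/usr/bin/env bash
# Sketch of the NVMF_APP array composition seen in nvmf/common.sh above.
# /path/to/nvmf_tgt is a placeholder for the built binary.
NVMF_APP=(/path/to/nvmf_tgt)                 # base command
NVMF_APP_SHM_ID=0
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # shared-memory id + trace flags
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
# Prefix the namespace wrapper, as nvmf/common.sh@293 does:
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
printf '%s ' "${NVMF_APP[@]}"; echo
# → ip netns exec cvl_0_0_ns_spdk /path/to/nvmf_tgt -i 0 -e 0xFFFF
```

Keeping the command as an array (rather than a flat string) preserves word boundaries when the final command line is executed, which is why the helpers expand it with `"${NVMF_APP[@]}"`.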
00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2503955 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:12.544 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=87cab3e256d8c486e48b42398da706d978b27b7ca9c1c137 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.RSE 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 87cab3e256d8c486e48b42398da706d978b27b7ca9c1c137 0 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 87cab3e256d8c486e48b42398da706d978b27b7ca9c1c137 0 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=87cab3e256d8c486e48b42398da706d978b27b7ca9c1c137 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.RSE 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.RSE 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.RSE 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1be7ca2879d763a1f65bb946df9b411a2cbeaf9408ffc1889ec8939973fe91d0 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.PBP 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1be7ca2879d763a1f65bb946df9b411a2cbeaf9408ffc1889ec8939973fe91d0 3 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1be7ca2879d763a1f65bb946df9b411a2cbeaf9408ffc1889ec8939973fe91d0 3 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1be7ca2879d763a1f65bb946df9b411a2cbeaf9408ffc1889ec8939973fe91d0 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.PBP 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.PBP 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.PBP 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8507bbb8f7e27ee57e962f7fef9810a2 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Cww 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8507bbb8f7e27ee57e962f7fef9810a2 1 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
8507bbb8f7e27ee57e962f7fef9810a2 1 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8507bbb8f7e27ee57e962f7fef9810a2 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Cww 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Cww 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Cww 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=165190941e81694ee6193ac780c725ee28603d1c908b349c 00:15:12.545 12:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.L2o 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 165190941e81694ee6193ac780c725ee28603d1c908b349c 2 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 165190941e81694ee6193ac780c725ee28603d1c908b349c 2 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=165190941e81694ee6193ac780c725ee28603d1c908b349c 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.L2o 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.L2o 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.L2o 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=75868577b40b60daa6202fd419fc028db6b562864bc56e6a 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ONg 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 75868577b40b60daa6202fd419fc028db6b562864bc56e6a 2 00:15:12.545 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 75868577b40b60daa6202fd419fc028db6b562864bc56e6a 2 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=75868577b40b60daa6202fd419fc028db6b562864bc56e6a 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ONg 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ONg 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.ONg 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=290536777dcbb631c5489f4b3209b25c 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.KwW 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 290536777dcbb631c5489f4b3209b25c 1 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 290536777dcbb631c5489f4b3209b25c 1 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=290536777dcbb631c5489f4b3209b25c 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
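Each `gen_dhchap_key` call above draws random bytes with `xxd -p -c0 -l N /dev/urandom` and pipes them through an inline `python -` to produce the `DHHC-1` secret written to `/tmp/spdk.key-*`. A standalone sketch of that formatting step, assuming the secret representation from NVMe TP 8006 / nvme-cli (base64 of the raw secret followed by its little-endian CRC-32; the in-tree helper is `format_dhchap_key` in `nvmf/common.sh`):

```shell
#!/usr/bin/env bash
# Hypothetical standalone re-implementation of the key formatting the log
# shows: wrap a hex secret as "DHHC-1:<digest-id>:<base64(secret||CRC32 LE)>:".
# Digest ids match the log's mapping: 0=null, 1=sha256, 2=sha384, 3=sha512.
format_dhchap_key() {
    local hexkey=$1 digest=$2
    python3 - "$hexkey" "$digest" <<'PY'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC-32 appended little-endian
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
}

# Fixed demo secret (the test script instead draws one from /dev/urandom):
format_dhchap_key 8507bbb8f7e27ee57e962f7fef9810a2 1
```

The trailing `chmod 0600` in the log matters because these files are consumed as DH-HMAC-CHAP secrets by the host later in the test.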
00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.KwW 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.KwW 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.KwW 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5b29ea37e3d1cb7cca90037a4f69d94654349be1fc82ecc2e979f4bb57844e7a 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.IrJ 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5b29ea37e3d1cb7cca90037a4f69d94654349be1fc82ecc2e979f4bb57844e7a 3 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 5b29ea37e3d1cb7cca90037a4f69d94654349be1fc82ecc2e979f4bb57844e7a 3 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5b29ea37e3d1cb7cca90037a4f69d94654349be1fc82ecc2e979f4bb57844e7a 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.IrJ 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.IrJ 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.IrJ 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2503933 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2503933 ']' 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
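The `gen_dhchap_key`/`format_key` sequence above reads random bytes with `xxd`, keeps them as an ASCII hex string, and hands that string to an inline `python -` snippet that wraps it into a `DHHC-1:<digest>:<base64>:` secret. A minimal sketch of that wrapping step, under the assumption (from the NVMe DH-HMAC-CHAP secret representation) that the base64 payload is the ASCII key with its CRC-32 appended little-endian; `format_dhchap_secret` is our name, not SPDK's:

```python
import base64
import zlib

def format_dhchap_secret(key: str, digest: int) -> str:
    # The log encodes the ASCII hex string itself, not the raw bytes it spells
    # (the base64 payloads above visibly start with the hex characters).
    data = key.encode()
    # Assumption: CRC-32 of the key, appended little-endian, per the NVMe
    # DH-HMAC-CHAP secret representation.
    crc = zlib.crc32(data).to_bytes(4, "little")
    return "DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(data + crc).decode())

# The 32-char sha256 key generated above (key=290536777dcbb631c5489f4b3209b25c,
# digest index 1); under the CRC assumption this reproduces the ckey2 secret
# that appears later in the log.
print(format_dhchap_secret("290536777dcbb631c5489f4b3209b25c", 1))
```

The trailing `chmod 0600` in the log then restricts the key file to the owner before its path is recorded in `keys[]`/`ckeys[]`.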
00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.546 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.806 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.806 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:12.806 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2503955 /var/tmp/host.sock 00:15:12.806 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2503955 ']' 00:15:12.806 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:12.806 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.806 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:12.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
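The two `waitforlisten` calls above (for the target's `/var/tmp/spdk.sock` and the host's `/var/tmp/host.sock`) poll with `max_retries=100` until the process is accepting connections on its UNIX-domain RPC socket. A hedged Python sketch of that polling pattern (the bash helper itself lives in `autotest_common.sh`; this is our reconstruction of the idea, not its code):

```python
import os
import socket
import time

def waitforlisten(path: str, max_retries: int = 100, delay: float = 0.1) -> None:
    """Poll until a UNIX-domain socket at `path` accepts a connection."""
    for _ in range(max_retries):
        if os.path.exists(path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return  # listener is up
            except OSError:
                pass    # socket file exists but nothing is accepting yet
            finally:
                s.close()
        time.sleep(delay)
    raise TimeoutError(f"no listener on {path} after {max_retries} retries")
```

Once both sockets answer, the test drives the target through `rpc_cmd` (spdk.sock) and the host stack through the `hostrpc` wrapper (`rpc.py -s /var/tmp/host.sock`), which is the split visible in every paired `keyring_file_add_key` call that follows.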
00:15:12.806 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.806 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.066 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:13.066 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:13.066 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:13.066 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.066 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.066 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.066 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:13.066 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RSE 00:15:13.066 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.066 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.066 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.066 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.RSE 00:15:13.066 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.RSE 00:15:13.324 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.PBP ]] 00:15:13.324 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PBP 00:15:13.324 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.324 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.324 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.324 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PBP 00:15:13.324 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PBP 00:15:13.324 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:13.324 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Cww 00:15:13.324 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.324 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.324 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.324 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Cww 00:15:13.324 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Cww 00:15:13.583 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.L2o ]] 00:15:13.583 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.L2o 00:15:13.583 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.583 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.583 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.583 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.L2o 00:15:13.583 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.L2o 00:15:13.841 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:13.841 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ONg 00:15:13.841 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.841 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.841 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.841 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ONg 00:15:13.841 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ONg 00:15:14.099 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.KwW ]] 00:15:14.099 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.KwW 00:15:14.099 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.099 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.099 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.099 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.KwW 00:15:14.099 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.KwW 00:15:14.099 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:14.099 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.IrJ 00:15:14.099 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.099 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.099 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.099 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.IrJ 00:15:14.099 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.IrJ 00:15:14.357 12:38:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:14.357 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:14.357 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:14.357 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.357 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:14.357 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:14.615 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:14.616 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.616 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:14.616 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:14.616 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:14.616 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.616 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.616 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.616 12:38:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.616 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.616 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.616 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.616 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.874 00:15:14.874 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.874 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.874 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.133 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.133 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.133 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.133 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:15.133 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.133 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.133 { 00:15:15.133 "cntlid": 1, 00:15:15.133 "qid": 0, 00:15:15.133 "state": "enabled", 00:15:15.133 "thread": "nvmf_tgt_poll_group_000", 00:15:15.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:15.133 "listen_address": { 00:15:15.133 "trtype": "TCP", 00:15:15.133 "adrfam": "IPv4", 00:15:15.133 "traddr": "10.0.0.2", 00:15:15.133 "trsvcid": "4420" 00:15:15.133 }, 00:15:15.133 "peer_address": { 00:15:15.133 "trtype": "TCP", 00:15:15.133 "adrfam": "IPv4", 00:15:15.133 "traddr": "10.0.0.1", 00:15:15.133 "trsvcid": "35206" 00:15:15.133 }, 00:15:15.133 "auth": { 00:15:15.133 "state": "completed", 00:15:15.133 "digest": "sha256", 00:15:15.133 "dhgroup": "null" 00:15:15.133 } 00:15:15.133 } 00:15:15.133 ]' 00:15:15.133 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.133 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:15.133 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.133 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:15.133 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.133 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.133 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.133 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.392 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:15:15.392 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:15:15.959 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.959 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:15.959 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.960 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.960 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.960 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.960 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:15:15.960 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:16.218 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:16.218 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.218 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:16.218 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:16.218 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:16.218 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.218 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.218 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.218 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.218 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.218 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.218 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.218 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.476 00:15:16.476 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.476 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.476 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.476 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.476 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.476 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.476 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.734 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.734 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.734 { 00:15:16.734 "cntlid": 3, 00:15:16.734 "qid": 0, 00:15:16.734 "state": "enabled", 00:15:16.734 "thread": "nvmf_tgt_poll_group_000", 00:15:16.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:16.734 "listen_address": { 00:15:16.734 "trtype": "TCP", 00:15:16.734 "adrfam": "IPv4", 00:15:16.734 
"traddr": "10.0.0.2", 00:15:16.734 "trsvcid": "4420" 00:15:16.734 }, 00:15:16.734 "peer_address": { 00:15:16.734 "trtype": "TCP", 00:15:16.734 "adrfam": "IPv4", 00:15:16.734 "traddr": "10.0.0.1", 00:15:16.734 "trsvcid": "35228" 00:15:16.734 }, 00:15:16.734 "auth": { 00:15:16.734 "state": "completed", 00:15:16.734 "digest": "sha256", 00:15:16.734 "dhgroup": "null" 00:15:16.734 } 00:15:16.734 } 00:15:16.734 ]' 00:15:16.734 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.734 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:16.734 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.734 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:16.734 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.734 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.734 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.734 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.993 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:15:16.993 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:15:17.560 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.560 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:17.560 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.560 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.560 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.560 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.560 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:17.560 12:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:17.819 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:17.819 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.819 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:17.819 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:15:17.819 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:17.819 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.819 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.819 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.819 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.819 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.819 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.819 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.819 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.078 00:15:18.078 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.078 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.078 
12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.078 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.078 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.078 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.078 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.336 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.336 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.336 { 00:15:18.336 "cntlid": 5, 00:15:18.336 "qid": 0, 00:15:18.336 "state": "enabled", 00:15:18.336 "thread": "nvmf_tgt_poll_group_000", 00:15:18.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:18.336 "listen_address": { 00:15:18.336 "trtype": "TCP", 00:15:18.336 "adrfam": "IPv4", 00:15:18.336 "traddr": "10.0.0.2", 00:15:18.336 "trsvcid": "4420" 00:15:18.336 }, 00:15:18.336 "peer_address": { 00:15:18.336 "trtype": "TCP", 00:15:18.336 "adrfam": "IPv4", 00:15:18.336 "traddr": "10.0.0.1", 00:15:18.336 "trsvcid": "35272" 00:15:18.336 }, 00:15:18.336 "auth": { 00:15:18.336 "state": "completed", 00:15:18.336 "digest": "sha256", 00:15:18.336 "dhgroup": "null" 00:15:18.336 } 00:15:18.336 } 00:15:18.336 ]' 00:15:18.336 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.336 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:18.336 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:15:18.336 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:18.336 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.336 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.336 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.336 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.595 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:15:18.595 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:15:19.160 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.160 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:19.160 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.160 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.160 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.160 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.160 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:19.160 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:19.418 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:19.418 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.418 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:19.418 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:19.418 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:19.418 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.418 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:19.418 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.418 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
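The trace above repeats one cycle per key: restrict the host to a single digest/dhgroup combination, allow the host NQN on the subsystem with that key, attach a controller (authentication happens during CONNECT), then verify and tear down. A compressed dry-run sketch of that cycle, with the NQNs, address, and socket path taken from this log; `run` only echoes the commands, since the real invocations need a live SPDK target and host application, and the `jq` verification of the qpair's auth state is omitted:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the per-key auth cycle traced in the log.
# `run` echoes instead of executing; socket paths and NQNs are copied from the log.
run() { echo "$@"; }

target_rpc="scripts/rpc.py"                          # target-side RPC (default socket)
host_rpc="scripts/rpc.py -s /var/tmp/host.sock"      # host-side RPC, as in the log
subnqn="nqn.2024-03.io.spdk:cnode0"
hostnqn="nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562"

digest=sha256
for dhgroup in null ffdhe2048; do                    # dhgroups exercised in this section
  for keyid in 0 1 2 3; do
    # 1. Pin the host to one DH-HMAC-CHAP digest/dhgroup combination.
    run $host_rpc bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # 2. Allow the host NQN on the subsystem with this key (target side).
    run $target_rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid"
    # 3. Attach a controller; DH-HMAC-CHAP runs during the CONNECT exchange.
    run $host_rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid"
    # 4. Tear down before the next key (the real test first checks the
    #    qpair's auth.digest/dhgroup/state via nvmf_subsystem_get_qpairs).
    run $host_rpc bdev_nvme_detach_controller nvme0
    run $target_rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  done
done
```

The log additionally exercises the kernel path between steps 3 and 4: `nvme connect -t tcp ... --dhchap-secret DHHC-1:...` followed by `nvme disconnect`, driving the same subsystem through the kernel NVMe/TCP initiator rather than the SPDK host RPC.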
00:15:19.418 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.418 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:19.418 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.418 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.676 00:15:19.676 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.676 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.676 12:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.676 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.676 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.676 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.676 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.934 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.934 
12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.934 { 00:15:19.934 "cntlid": 7, 00:15:19.934 "qid": 0, 00:15:19.934 "state": "enabled", 00:15:19.934 "thread": "nvmf_tgt_poll_group_000", 00:15:19.934 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:19.934 "listen_address": { 00:15:19.934 "trtype": "TCP", 00:15:19.934 "adrfam": "IPv4", 00:15:19.934 "traddr": "10.0.0.2", 00:15:19.934 "trsvcid": "4420" 00:15:19.934 }, 00:15:19.934 "peer_address": { 00:15:19.934 "trtype": "TCP", 00:15:19.934 "adrfam": "IPv4", 00:15:19.934 "traddr": "10.0.0.1", 00:15:19.934 "trsvcid": "35296" 00:15:19.934 }, 00:15:19.934 "auth": { 00:15:19.934 "state": "completed", 00:15:19.934 "digest": "sha256", 00:15:19.934 "dhgroup": "null" 00:15:19.934 } 00:15:19.934 } 00:15:19.934 ]' 00:15:19.934 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.934 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:19.934 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.935 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:19.935 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.935 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.935 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.935 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.193 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:15:20.193 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:15:20.759 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.759 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:20.759 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.759 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.759 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.759 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:20.759 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.759 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:20.759 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:21.018 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:21.018 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.018 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:21.018 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:21.018 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:21.018 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.018 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.018 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.018 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.018 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.018 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.018 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.018 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.277 00:15:21.277 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.277 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.277 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.277 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.277 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.277 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.277 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.277 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.277 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.277 { 00:15:21.277 "cntlid": 9, 00:15:21.277 "qid": 0, 00:15:21.277 "state": "enabled", 00:15:21.277 "thread": "nvmf_tgt_poll_group_000", 00:15:21.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:21.277 "listen_address": { 00:15:21.277 "trtype": "TCP", 00:15:21.277 "adrfam": "IPv4", 00:15:21.277 "traddr": "10.0.0.2", 00:15:21.277 "trsvcid": "4420" 00:15:21.277 }, 00:15:21.277 "peer_address": { 00:15:21.277 "trtype": "TCP", 00:15:21.277 "adrfam": "IPv4", 00:15:21.277 "traddr": "10.0.0.1", 00:15:21.277 "trsvcid": "35310" 00:15:21.277 
}, 00:15:21.277 "auth": { 00:15:21.277 "state": "completed", 00:15:21.277 "digest": "sha256", 00:15:21.277 "dhgroup": "ffdhe2048" 00:15:21.277 } 00:15:21.277 } 00:15:21.277 ]' 00:15:21.277 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.535 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:21.535 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.535 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:21.535 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.535 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.535 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.535 12:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.794 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:15:21.794 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret 
DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:15:22.361 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.361 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:22.361 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.361 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.361 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.361 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.361 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:22.362 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:22.620 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:22.620 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.620 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:22.620 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:22.620 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:15:22.620 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.620 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.620 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.620 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.620 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.620 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.620 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.620 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.879 00:15:22.879 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.879 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.879 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.879 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.879 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.879 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.879 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.879 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.879 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.879 { 00:15:22.879 "cntlid": 11, 00:15:22.879 "qid": 0, 00:15:22.879 "state": "enabled", 00:15:22.879 "thread": "nvmf_tgt_poll_group_000", 00:15:22.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:22.879 "listen_address": { 00:15:22.879 "trtype": "TCP", 00:15:22.879 "adrfam": "IPv4", 00:15:22.879 "traddr": "10.0.0.2", 00:15:22.879 "trsvcid": "4420" 00:15:22.879 }, 00:15:22.879 "peer_address": { 00:15:22.879 "trtype": "TCP", 00:15:22.879 "adrfam": "IPv4", 00:15:22.879 "traddr": "10.0.0.1", 00:15:22.879 "trsvcid": "38254" 00:15:22.879 }, 00:15:22.879 "auth": { 00:15:22.880 "state": "completed", 00:15:22.880 "digest": "sha256", 00:15:22.880 "dhgroup": "ffdhe2048" 00:15:22.880 } 00:15:22.880 } 00:15:22.880 ]' 00:15:22.880 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:23.138 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:23.138 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:23.138 12:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:23.138 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:23.138 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.138 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.138 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.396 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:15:23.396 12:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:15:23.963 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.963 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:23.963 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:23.963 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.963 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.963 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.963 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:23.963 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:23.963 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:23.963 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.963 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.963 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:23.963 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:23.963 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.963 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.963 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.963 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:24.220 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.220 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.220 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.220 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.220 00:15:24.478 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.478 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.478 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.478 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.478 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.478 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.478 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.478 12:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.478 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.478 { 00:15:24.478 "cntlid": 13, 00:15:24.478 "qid": 0, 00:15:24.478 "state": "enabled", 00:15:24.478 "thread": "nvmf_tgt_poll_group_000", 00:15:24.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:24.478 "listen_address": { 00:15:24.478 "trtype": "TCP", 00:15:24.478 "adrfam": "IPv4", 00:15:24.478 "traddr": "10.0.0.2", 00:15:24.478 "trsvcid": "4420" 00:15:24.478 }, 00:15:24.478 "peer_address": { 00:15:24.478 "trtype": "TCP", 00:15:24.478 "adrfam": "IPv4", 00:15:24.478 "traddr": "10.0.0.1", 00:15:24.478 "trsvcid": "38274" 00:15:24.478 }, 00:15:24.478 "auth": { 00:15:24.478 "state": "completed", 00:15:24.478 "digest": "sha256", 00:15:24.478 "dhgroup": "ffdhe2048" 00:15:24.478 } 00:15:24.478 } 00:15:24.478 ]' 00:15:24.478 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.478 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.478 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.735 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:24.735 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.735 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.735 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.735 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.993 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:15:24.993 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:15:25.559 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.559 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:25.559 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.559 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.559 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.559 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.559 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:25.559 12:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:25.817 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:25.817 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.817 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:25.817 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:25.817 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:25.817 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.817 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:25.817 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.817 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.817 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.817 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:25.817 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.817 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:26.075 00:15:26.075 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.075 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.075 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.075 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.075 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.075 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.075 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.075 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.075 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.075 { 00:15:26.075 "cntlid": 15, 00:15:26.075 "qid": 0, 00:15:26.075 "state": "enabled", 00:15:26.075 "thread": "nvmf_tgt_poll_group_000", 00:15:26.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:26.075 "listen_address": { 00:15:26.075 "trtype": "TCP", 00:15:26.075 "adrfam": "IPv4", 00:15:26.075 "traddr": "10.0.0.2", 00:15:26.075 "trsvcid": "4420" 00:15:26.075 }, 00:15:26.075 "peer_address": { 00:15:26.075 "trtype": "TCP", 00:15:26.075 "adrfam": "IPv4", 00:15:26.075 "traddr": "10.0.0.1", 
00:15:26.075 "trsvcid": "38302" 00:15:26.075 }, 00:15:26.075 "auth": { 00:15:26.075 "state": "completed", 00:15:26.075 "digest": "sha256", 00:15:26.075 "dhgroup": "ffdhe2048" 00:15:26.075 } 00:15:26.075 } 00:15:26.075 ]' 00:15:26.075 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.333 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.333 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.333 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:26.333 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.333 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.333 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.333 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.591 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:15:26.591 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:15:27.157 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.157 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:27.157 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.157 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.157 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.157 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:27.157 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.157 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:27.157 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:27.416 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:27.416 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.416 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:27.416 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:27.416 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:27.416 12:39:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.416 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.416 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.416 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.416 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.416 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.416 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.416 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.674 00:15:27.674 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.674 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.674 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.674 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.674 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.674 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.674 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.674 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.674 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.674 { 00:15:27.674 "cntlid": 17, 00:15:27.674 "qid": 0, 00:15:27.674 "state": "enabled", 00:15:27.674 "thread": "nvmf_tgt_poll_group_000", 00:15:27.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:27.674 "listen_address": { 00:15:27.674 "trtype": "TCP", 00:15:27.674 "adrfam": "IPv4", 00:15:27.674 "traddr": "10.0.0.2", 00:15:27.674 "trsvcid": "4420" 00:15:27.674 }, 00:15:27.674 "peer_address": { 00:15:27.674 "trtype": "TCP", 00:15:27.674 "adrfam": "IPv4", 00:15:27.674 "traddr": "10.0.0.1", 00:15:27.674 "trsvcid": "38318" 00:15:27.674 }, 00:15:27.674 "auth": { 00:15:27.674 "state": "completed", 00:15:27.674 "digest": "sha256", 00:15:27.674 "dhgroup": "ffdhe3072" 00:15:27.674 } 00:15:27.674 } 00:15:27.674 ]' 00:15:27.674 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.932 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:27.932 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.932 12:39:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:27.932 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.932 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.932 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.932 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.191 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:15:28.191 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:15:28.757 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.757 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:28.757 12:39:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.757 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.757 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.757 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.757 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:28.757 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:29.015 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:29.016 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.016 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:29.016 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:29.016 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:29.016 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.016 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.016 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.016 12:39:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.016 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.016 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.016 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.016 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.274 00:15:29.274 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.274 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.274 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.274 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.274 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.274 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.274 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:29.274 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.275 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.275 { 00:15:29.275 "cntlid": 19, 00:15:29.275 "qid": 0, 00:15:29.275 "state": "enabled", 00:15:29.275 "thread": "nvmf_tgt_poll_group_000", 00:15:29.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:29.275 "listen_address": { 00:15:29.275 "trtype": "TCP", 00:15:29.275 "adrfam": "IPv4", 00:15:29.275 "traddr": "10.0.0.2", 00:15:29.275 "trsvcid": "4420" 00:15:29.275 }, 00:15:29.275 "peer_address": { 00:15:29.275 "trtype": "TCP", 00:15:29.275 "adrfam": "IPv4", 00:15:29.275 "traddr": "10.0.0.1", 00:15:29.275 "trsvcid": "38354" 00:15:29.275 }, 00:15:29.275 "auth": { 00:15:29.275 "state": "completed", 00:15:29.275 "digest": "sha256", 00:15:29.275 "dhgroup": "ffdhe3072" 00:15:29.275 } 00:15:29.275 } 00:15:29.275 ]' 00:15:29.275 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.532 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:29.532 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.532 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:29.532 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.532 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.532 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.532 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.789 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:15:29.789 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:15:30.353 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.353 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:30.353 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.354 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.354 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.354 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.354 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:30.354 12:39:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:30.611 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:30.611 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.611 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:30.611 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:30.611 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:30.611 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.611 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.611 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.611 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.611 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.611 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.611 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.611 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.869 00:15:30.869 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.869 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.869 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.127 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.127 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.127 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.127 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.127 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.127 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.127 { 00:15:31.127 "cntlid": 21, 00:15:31.127 "qid": 0, 00:15:31.127 "state": "enabled", 00:15:31.127 "thread": "nvmf_tgt_poll_group_000", 00:15:31.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:31.127 "listen_address": { 00:15:31.127 "trtype": "TCP", 00:15:31.127 "adrfam": "IPv4", 00:15:31.127 "traddr": "10.0.0.2", 00:15:31.127 
"trsvcid": "4420" 00:15:31.127 }, 00:15:31.127 "peer_address": { 00:15:31.127 "trtype": "TCP", 00:15:31.127 "adrfam": "IPv4", 00:15:31.127 "traddr": "10.0.0.1", 00:15:31.127 "trsvcid": "38374" 00:15:31.127 }, 00:15:31.127 "auth": { 00:15:31.127 "state": "completed", 00:15:31.127 "digest": "sha256", 00:15:31.127 "dhgroup": "ffdhe3072" 00:15:31.127 } 00:15:31.127 } 00:15:31.127 ]' 00:15:31.127 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.127 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:31.127 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.127 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:31.127 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.127 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.127 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.127 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.384 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:15:31.384 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:15:32.011 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.011 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:32.011 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.011 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.011 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.011 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.011 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:32.011 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:32.362 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:32.362 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.362 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:32.362 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:32.362 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:32.362 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.362 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:32.362 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.362 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.362 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.362 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:32.362 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.362 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.362 00:15:32.362 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.362 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.362 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.639 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.639 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.639 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.639 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.639 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.639 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.639 { 00:15:32.639 "cntlid": 23, 00:15:32.639 "qid": 0, 00:15:32.639 "state": "enabled", 00:15:32.639 "thread": "nvmf_tgt_poll_group_000", 00:15:32.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:32.639 "listen_address": { 00:15:32.639 "trtype": "TCP", 00:15:32.639 "adrfam": "IPv4", 00:15:32.639 "traddr": "10.0.0.2", 00:15:32.639 "trsvcid": "4420" 00:15:32.639 }, 00:15:32.639 "peer_address": { 00:15:32.639 "trtype": "TCP", 00:15:32.639 "adrfam": "IPv4", 00:15:32.639 "traddr": "10.0.0.1", 00:15:32.639 "trsvcid": "48004" 00:15:32.639 }, 00:15:32.639 "auth": { 00:15:32.639 "state": "completed", 00:15:32.639 "digest": "sha256", 00:15:32.639 "dhgroup": "ffdhe3072" 00:15:32.639 } 00:15:32.639 } 00:15:32.639 ]' 00:15:32.639 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.639 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:32.639 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.639 12:39:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:32.639 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.639 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.639 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.639 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.897 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:15:32.897 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:15:33.464 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.464 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:33.464 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.464 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:33.464 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.464 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:33.464 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.464 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:33.464 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:33.724 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:33.724 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.724 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:33.724 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:33.724 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:33.724 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.724 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.724 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.724 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:33.724 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.724 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.724 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.724 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.983 00:15:33.983 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.983 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.983 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.242 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.242 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.242 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.242 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.242 12:39:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.242 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.242 { 00:15:34.242 "cntlid": 25, 00:15:34.242 "qid": 0, 00:15:34.242 "state": "enabled", 00:15:34.242 "thread": "nvmf_tgt_poll_group_000", 00:15:34.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:34.242 "listen_address": { 00:15:34.242 "trtype": "TCP", 00:15:34.242 "adrfam": "IPv4", 00:15:34.242 "traddr": "10.0.0.2", 00:15:34.242 "trsvcid": "4420" 00:15:34.242 }, 00:15:34.242 "peer_address": { 00:15:34.242 "trtype": "TCP", 00:15:34.242 "adrfam": "IPv4", 00:15:34.242 "traddr": "10.0.0.1", 00:15:34.242 "trsvcid": "48038" 00:15:34.242 }, 00:15:34.242 "auth": { 00:15:34.242 "state": "completed", 00:15:34.242 "digest": "sha256", 00:15:34.242 "dhgroup": "ffdhe4096" 00:15:34.242 } 00:15:34.242 } 00:15:34.242 ]' 00:15:34.242 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.242 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:34.242 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.242 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:34.242 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.242 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.242 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.243 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.503 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:15:34.503 12:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:15:35.074 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.074 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:35.074 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.074 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.074 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.074 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.074 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:35.074 12:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:35.333 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:35.333 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.333 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:35.333 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:35.333 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:35.333 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.333 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.333 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.333 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.333 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.333 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.333 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.333 12:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.593 00:15:35.593 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.593 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.593 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.853 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.853 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.853 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.853 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.853 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.853 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.853 { 00:15:35.853 "cntlid": 27, 00:15:35.853 "qid": 0, 00:15:35.853 "state": "enabled", 00:15:35.853 "thread": "nvmf_tgt_poll_group_000", 00:15:35.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:35.853 "listen_address": { 00:15:35.853 "trtype": "TCP", 00:15:35.853 "adrfam": "IPv4", 00:15:35.853 "traddr": "10.0.0.2", 00:15:35.853 
"trsvcid": "4420" 00:15:35.853 }, 00:15:35.853 "peer_address": { 00:15:35.853 "trtype": "TCP", 00:15:35.853 "adrfam": "IPv4", 00:15:35.853 "traddr": "10.0.0.1", 00:15:35.853 "trsvcid": "48070" 00:15:35.853 }, 00:15:35.853 "auth": { 00:15:35.853 "state": "completed", 00:15:35.853 "digest": "sha256", 00:15:35.853 "dhgroup": "ffdhe4096" 00:15:35.853 } 00:15:35.853 } 00:15:35.853 ]' 00:15:35.853 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.853 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.853 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.853 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:35.853 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.113 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.113 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.113 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.113 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:15:36.113 12:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:15:36.682 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.682 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:36.683 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.683 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.683 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.683 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.683 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:36.683 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:36.942 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:36.942 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.942 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:36.942 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:36.942 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:36.942 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.942 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.942 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.942 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.942 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.942 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.942 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.942 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.202 00:15:37.202 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.202 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:15:37.202 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.462 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.462 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.462 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.462 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.462 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.462 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.462 { 00:15:37.462 "cntlid": 29, 00:15:37.462 "qid": 0, 00:15:37.462 "state": "enabled", 00:15:37.462 "thread": "nvmf_tgt_poll_group_000", 00:15:37.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:37.462 "listen_address": { 00:15:37.462 "trtype": "TCP", 00:15:37.462 "adrfam": "IPv4", 00:15:37.462 "traddr": "10.0.0.2", 00:15:37.462 "trsvcid": "4420" 00:15:37.462 }, 00:15:37.462 "peer_address": { 00:15:37.462 "trtype": "TCP", 00:15:37.462 "adrfam": "IPv4", 00:15:37.462 "traddr": "10.0.0.1", 00:15:37.462 "trsvcid": "48090" 00:15:37.462 }, 00:15:37.462 "auth": { 00:15:37.462 "state": "completed", 00:15:37.462 "digest": "sha256", 00:15:37.462 "dhgroup": "ffdhe4096" 00:15:37.462 } 00:15:37.462 } 00:15:37.462 ]' 00:15:37.462 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.462 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.462 12:39:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.462 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:37.462 12:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.721 12:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.721 12:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.721 12:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.721 12:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:15:37.721 12:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:15:38.660 12:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.660 12:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:38.660 12:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.660 12:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.660 12:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.660 12:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.660 12:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:38.660 12:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:38.660 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:38.660 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.660 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:38.660 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:38.660 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:38.660 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.660 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:38.660 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.660 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.660 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.660 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:38.660 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.660 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.920 00:15:38.920 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.920 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.920 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.180 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.180 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.180 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.180 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:39.180 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.180 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.180 { 00:15:39.180 "cntlid": 31, 00:15:39.180 "qid": 0, 00:15:39.180 "state": "enabled", 00:15:39.180 "thread": "nvmf_tgt_poll_group_000", 00:15:39.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:39.180 "listen_address": { 00:15:39.180 "trtype": "TCP", 00:15:39.180 "adrfam": "IPv4", 00:15:39.180 "traddr": "10.0.0.2", 00:15:39.180 "trsvcid": "4420" 00:15:39.180 }, 00:15:39.180 "peer_address": { 00:15:39.180 "trtype": "TCP", 00:15:39.180 "adrfam": "IPv4", 00:15:39.180 "traddr": "10.0.0.1", 00:15:39.180 "trsvcid": "48122" 00:15:39.180 }, 00:15:39.180 "auth": { 00:15:39.180 "state": "completed", 00:15:39.180 "digest": "sha256", 00:15:39.180 "dhgroup": "ffdhe4096" 00:15:39.180 } 00:15:39.180 } 00:15:39.180 ]' 00:15:39.180 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.180 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.181 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.181 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:39.181 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.181 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.181 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.181 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.440 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:15:39.440 12:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:15:40.009 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.009 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.009 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.009 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.009 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.009 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:40.009 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.009 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:40.009 12:39:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:40.267 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:40.267 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.267 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:40.267 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:40.267 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:40.267 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.267 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.267 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.267 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.267 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.267 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.268 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.268 12:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.835 00:15:40.835 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.835 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.835 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.835 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.835 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.835 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.835 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.835 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.835 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.835 { 00:15:40.835 "cntlid": 33, 00:15:40.835 "qid": 0, 00:15:40.835 "state": "enabled", 00:15:40.835 "thread": "nvmf_tgt_poll_group_000", 00:15:40.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:40.835 "listen_address": { 00:15:40.835 "trtype": "TCP", 00:15:40.835 "adrfam": "IPv4", 00:15:40.835 "traddr": "10.0.0.2", 00:15:40.835 
"trsvcid": "4420" 00:15:40.835 }, 00:15:40.835 "peer_address": { 00:15:40.835 "trtype": "TCP", 00:15:40.835 "adrfam": "IPv4", 00:15:40.835 "traddr": "10.0.0.1", 00:15:40.835 "trsvcid": "48162" 00:15:40.835 }, 00:15:40.835 "auth": { 00:15:40.835 "state": "completed", 00:15:40.835 "digest": "sha256", 00:15:40.835 "dhgroup": "ffdhe6144" 00:15:40.835 } 00:15:40.835 } 00:15:40.835 ]' 00:15:40.835 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.835 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.835 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.095 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:41.095 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.095 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.095 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.095 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.354 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:15:41.354 12:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:15:41.922 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.922 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:41.922 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.922 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.922 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.922 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.922 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:41.922 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:41.922 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:41.922 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.922 12:39:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:41.922 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:41.922 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:41.922 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.922 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.922 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.922 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.923 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.923 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.923 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.923 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.490 00:15:42.490 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.490 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.491 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.491 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.491 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.491 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.491 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.491 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.491 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.491 { 00:15:42.491 "cntlid": 35, 00:15:42.491 "qid": 0, 00:15:42.491 "state": "enabled", 00:15:42.491 "thread": "nvmf_tgt_poll_group_000", 00:15:42.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:42.491 "listen_address": { 00:15:42.491 "trtype": "TCP", 00:15:42.491 "adrfam": "IPv4", 00:15:42.491 "traddr": "10.0.0.2", 00:15:42.491 "trsvcid": "4420" 00:15:42.491 }, 00:15:42.491 "peer_address": { 00:15:42.491 "trtype": "TCP", 00:15:42.491 "adrfam": "IPv4", 00:15:42.491 "traddr": "10.0.0.1", 00:15:42.491 "trsvcid": "57562" 00:15:42.491 }, 00:15:42.491 "auth": { 00:15:42.491 "state": "completed", 00:15:42.491 "digest": "sha256", 00:15:42.491 "dhgroup": "ffdhe6144" 00:15:42.491 } 00:15:42.491 } 00:15:42.491 ]' 00:15:42.491 12:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.751 12:39:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.751 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.751 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:42.751 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.751 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.751 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.751 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.011 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:15:43.011 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:15:43.580 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.580 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:43.580 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.580 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.580 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.580 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.580 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:43.580 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:43.580 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:43.580 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.580 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:43.580 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:43.580 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:43.580 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.580 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:43.580 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.580 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.580 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.580 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.580 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.580 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.149 00:15:44.149 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.149 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.149 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.149 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.149 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.149 12:39:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.149 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.149 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.408 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.408 { 00:15:44.408 "cntlid": 37, 00:15:44.408 "qid": 0, 00:15:44.408 "state": "enabled", 00:15:44.408 "thread": "nvmf_tgt_poll_group_000", 00:15:44.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:44.408 "listen_address": { 00:15:44.408 "trtype": "TCP", 00:15:44.408 "adrfam": "IPv4", 00:15:44.408 "traddr": "10.0.0.2", 00:15:44.408 "trsvcid": "4420" 00:15:44.408 }, 00:15:44.408 "peer_address": { 00:15:44.408 "trtype": "TCP", 00:15:44.408 "adrfam": "IPv4", 00:15:44.408 "traddr": "10.0.0.1", 00:15:44.408 "trsvcid": "57604" 00:15:44.408 }, 00:15:44.408 "auth": { 00:15:44.408 "state": "completed", 00:15:44.408 "digest": "sha256", 00:15:44.408 "dhgroup": "ffdhe6144" 00:15:44.408 } 00:15:44.408 } 00:15:44.408 ]' 00:15:44.408 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.408 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.408 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.408 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:44.408 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.408 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.408 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.408 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.668 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:15:44.668 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:15:45.237 12:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.237 12:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:45.237 12:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.237 12:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.237 12:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.237 12:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.237 12:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:45.237 12:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:45.496 12:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:45.496 12:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.496 12:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:45.496 12:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:45.496 12:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:45.496 12:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.496 12:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:45.496 12:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.496 12:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.496 12:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.496 12:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:45.496 12:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:45.496 12:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:45.755 00:15:45.755 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.755 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.755 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.013 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.013 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.013 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.013 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.013 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.013 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.014 { 00:15:46.014 "cntlid": 39, 00:15:46.014 "qid": 0, 00:15:46.014 "state": "enabled", 00:15:46.014 "thread": "nvmf_tgt_poll_group_000", 00:15:46.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:46.014 "listen_address": { 00:15:46.014 "trtype": "TCP", 00:15:46.014 "adrfam": 
"IPv4", 00:15:46.014 "traddr": "10.0.0.2", 00:15:46.014 "trsvcid": "4420" 00:15:46.014 }, 00:15:46.014 "peer_address": { 00:15:46.014 "trtype": "TCP", 00:15:46.014 "adrfam": "IPv4", 00:15:46.014 "traddr": "10.0.0.1", 00:15:46.014 "trsvcid": "57638" 00:15:46.014 }, 00:15:46.014 "auth": { 00:15:46.014 "state": "completed", 00:15:46.014 "digest": "sha256", 00:15:46.014 "dhgroup": "ffdhe6144" 00:15:46.014 } 00:15:46.014 } 00:15:46.014 ]' 00:15:46.014 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.014 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.014 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.014 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:46.014 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.014 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.014 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.014 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.274 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:15:46.274 12:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:15:46.842 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.842 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.842 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.842 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.842 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.842 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:46.842 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.842 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:46.842 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:47.102 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:47.102 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.102 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:47.102 
12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:47.102 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:47.102 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.102 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.102 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.102 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.102 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.102 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.102 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.102 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.670 00:15:47.670 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.670 12:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.670 12:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.670 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.670 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.670 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.670 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.670 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.670 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.670 { 00:15:47.670 "cntlid": 41, 00:15:47.670 "qid": 0, 00:15:47.670 "state": "enabled", 00:15:47.670 "thread": "nvmf_tgt_poll_group_000", 00:15:47.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:47.670 "listen_address": { 00:15:47.670 "trtype": "TCP", 00:15:47.670 "adrfam": "IPv4", 00:15:47.670 "traddr": "10.0.0.2", 00:15:47.670 "trsvcid": "4420" 00:15:47.670 }, 00:15:47.670 "peer_address": { 00:15:47.670 "trtype": "TCP", 00:15:47.670 "adrfam": "IPv4", 00:15:47.670 "traddr": "10.0.0.1", 00:15:47.670 "trsvcid": "57654" 00:15:47.670 }, 00:15:47.670 "auth": { 00:15:47.670 "state": "completed", 00:15:47.670 "digest": "sha256", 00:15:47.670 "dhgroup": "ffdhe8192" 00:15:47.670 } 00:15:47.670 } 00:15:47.670 ]' 00:15:47.670 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.670 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:15:47.670 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.930 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:47.930 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.930 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.930 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.930 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.930 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:15:47.930 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:15:48.498 12:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.498 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:48.498 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.498 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.758 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.758 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.758 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:48.758 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:48.758 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:48.758 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.758 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:48.758 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:48.758 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:48.758 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.758 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:48.758 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.758 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.758 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.758 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.758 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.758 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.325 00:15:49.325 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.325 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.325 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.585 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.585 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.585 12:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.585 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.585 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.585 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.585 { 00:15:49.585 "cntlid": 43, 00:15:49.585 "qid": 0, 00:15:49.585 "state": "enabled", 00:15:49.585 "thread": "nvmf_tgt_poll_group_000", 00:15:49.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:49.585 "listen_address": { 00:15:49.585 "trtype": "TCP", 00:15:49.585 "adrfam": "IPv4", 00:15:49.585 "traddr": "10.0.0.2", 00:15:49.585 "trsvcid": "4420" 00:15:49.585 }, 00:15:49.585 "peer_address": { 00:15:49.585 "trtype": "TCP", 00:15:49.585 "adrfam": "IPv4", 00:15:49.585 "traddr": "10.0.0.1", 00:15:49.585 "trsvcid": "57686" 00:15:49.585 }, 00:15:49.585 "auth": { 00:15:49.585 "state": "completed", 00:15:49.585 "digest": "sha256", 00:15:49.585 "dhgroup": "ffdhe8192" 00:15:49.585 } 00:15:49.585 } 00:15:49.585 ]' 00:15:49.585 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.585 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.585 12:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.585 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:49.585 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.585 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.585 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.585 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.844 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:15:49.844 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:15:50.411 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.411 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:50.411 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.411 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.411 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.411 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.411 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:50.411 12:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:50.670 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:50.670 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.670 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:50.670 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:50.670 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:50.670 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.670 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.670 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.670 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.670 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.670 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.670 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.670 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.238 00:15:51.238 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.238 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.238 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.238 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.238 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.238 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.238 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.497 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.497 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.497 { 00:15:51.497 "cntlid": 45, 00:15:51.497 "qid": 0, 00:15:51.497 "state": "enabled", 00:15:51.497 "thread": "nvmf_tgt_poll_group_000", 00:15:51.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:51.497 
"listen_address": { 00:15:51.497 "trtype": "TCP", 00:15:51.497 "adrfam": "IPv4", 00:15:51.497 "traddr": "10.0.0.2", 00:15:51.497 "trsvcid": "4420" 00:15:51.497 }, 00:15:51.497 "peer_address": { 00:15:51.497 "trtype": "TCP", 00:15:51.497 "adrfam": "IPv4", 00:15:51.497 "traddr": "10.0.0.1", 00:15:51.497 "trsvcid": "57706" 00:15:51.497 }, 00:15:51.497 "auth": { 00:15:51.497 "state": "completed", 00:15:51.497 "digest": "sha256", 00:15:51.497 "dhgroup": "ffdhe8192" 00:15:51.497 } 00:15:51.497 } 00:15:51.497 ]' 00:15:51.497 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.497 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:51.497 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.497 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:51.497 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.497 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.497 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.497 12:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.756 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:15:51.756 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:15:52.326 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.326 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:52.326 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.326 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.326 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.326 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.326 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:52.326 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:52.585 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:52.585 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.585 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:52.585 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:52.585 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:52.585 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.585 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:52.585 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.585 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.585 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.585 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:52.585 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:52.585 12:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:53.152 00:15:53.152 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.152 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:53.152 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.152 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.152 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.152 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.152 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.152 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.152 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.152 { 00:15:53.152 "cntlid": 47, 00:15:53.152 "qid": 0, 00:15:53.152 "state": "enabled", 00:15:53.152 "thread": "nvmf_tgt_poll_group_000", 00:15:53.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:53.152 "listen_address": { 00:15:53.152 "trtype": "TCP", 00:15:53.152 "adrfam": "IPv4", 00:15:53.152 "traddr": "10.0.0.2", 00:15:53.152 "trsvcid": "4420" 00:15:53.152 }, 00:15:53.152 "peer_address": { 00:15:53.152 "trtype": "TCP", 00:15:53.152 "adrfam": "IPv4", 00:15:53.152 "traddr": "10.0.0.1", 00:15:53.152 "trsvcid": "44032" 00:15:53.152 }, 00:15:53.152 "auth": { 00:15:53.152 "state": "completed", 00:15:53.152 "digest": "sha256", 00:15:53.152 "dhgroup": "ffdhe8192" 00:15:53.152 } 00:15:53.152 } 00:15:53.152 ]' 00:15:53.152 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.152 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.152 12:39:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.411 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:53.411 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.411 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.411 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.411 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.411 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:15:53.411 12:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:15:53.979 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.979 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.979 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:53.979 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.979 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.979 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:53.979 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.979 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.979 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:53.980 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:54.239 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:54.239 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.239 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:54.239 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:54.239 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:54.239 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.239 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.239 
12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.239 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.239 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.239 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.239 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.239 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.498 00:15:54.498 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.498 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.498 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.757 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.757 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.757 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.757 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.757 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.757 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.757 { 00:15:54.757 "cntlid": 49, 00:15:54.757 "qid": 0, 00:15:54.757 "state": "enabled", 00:15:54.757 "thread": "nvmf_tgt_poll_group_000", 00:15:54.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:54.757 "listen_address": { 00:15:54.757 "trtype": "TCP", 00:15:54.757 "adrfam": "IPv4", 00:15:54.757 "traddr": "10.0.0.2", 00:15:54.757 "trsvcid": "4420" 00:15:54.757 }, 00:15:54.757 "peer_address": { 00:15:54.757 "trtype": "TCP", 00:15:54.757 "adrfam": "IPv4", 00:15:54.757 "traddr": "10.0.0.1", 00:15:54.757 "trsvcid": "44050" 00:15:54.757 }, 00:15:54.757 "auth": { 00:15:54.757 "state": "completed", 00:15:54.757 "digest": "sha384", 00:15:54.757 "dhgroup": "null" 00:15:54.757 } 00:15:54.757 } 00:15:54.757 ]' 00:15:54.757 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.757 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.757 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.757 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:54.757 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.017 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.017 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:15:55.017 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.017 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:15:55.017 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:15:55.585 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.585 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:55.585 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.585 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.585 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.585 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.585 12:39:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:55.585 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:55.844 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:55.844 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.844 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:55.844 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:55.844 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:55.844 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.844 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.844 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.844 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.845 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.845 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.845 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.845 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.103 00:15:56.103 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.103 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.103 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.361 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.361 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.361 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.361 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.361 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.361 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.361 { 00:15:56.361 "cntlid": 51, 00:15:56.361 "qid": 0, 00:15:56.361 "state": "enabled", 00:15:56.361 "thread": "nvmf_tgt_poll_group_000", 00:15:56.361 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:56.361 "listen_address": { 00:15:56.361 "trtype": "TCP", 00:15:56.361 "adrfam": "IPv4", 00:15:56.361 "traddr": "10.0.0.2", 00:15:56.361 "trsvcid": "4420" 00:15:56.361 }, 00:15:56.361 "peer_address": { 00:15:56.361 "trtype": "TCP", 00:15:56.361 "adrfam": "IPv4", 00:15:56.361 "traddr": "10.0.0.1", 00:15:56.361 "trsvcid": "44082" 00:15:56.361 }, 00:15:56.361 "auth": { 00:15:56.361 "state": "completed", 00:15:56.361 "digest": "sha384", 00:15:56.361 "dhgroup": "null" 00:15:56.361 } 00:15:56.361 } 00:15:56.361 ]' 00:15:56.361 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.361 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:56.361 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.361 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:56.361 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.361 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.361 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.361 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.620 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:15:56.620 12:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:15:57.188 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.188 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:57.188 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.188 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.188 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.188 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.188 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:57.188 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:57.448 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:57.448 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:15:57.448 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:57.448 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:57.448 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:57.448 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.448 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.448 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.448 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.448 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.448 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.448 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.448 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.707 00:15:57.707 12:39:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.707 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.707 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.967 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.967 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.967 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.967 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.967 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.967 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.967 { 00:15:57.967 "cntlid": 53, 00:15:57.967 "qid": 0, 00:15:57.967 "state": "enabled", 00:15:57.967 "thread": "nvmf_tgt_poll_group_000", 00:15:57.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:57.967 "listen_address": { 00:15:57.967 "trtype": "TCP", 00:15:57.967 "adrfam": "IPv4", 00:15:57.967 "traddr": "10.0.0.2", 00:15:57.967 "trsvcid": "4420" 00:15:57.967 }, 00:15:57.967 "peer_address": { 00:15:57.967 "trtype": "TCP", 00:15:57.967 "adrfam": "IPv4", 00:15:57.967 "traddr": "10.0.0.1", 00:15:57.967 "trsvcid": "44116" 00:15:57.967 }, 00:15:57.967 "auth": { 00:15:57.967 "state": "completed", 00:15:57.967 "digest": "sha384", 00:15:57.967 "dhgroup": "null" 00:15:57.967 } 00:15:57.967 } 00:15:57.967 ]' 00:15:57.967 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:15:57.967 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.967 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.967 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:57.967 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.967 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.967 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.967 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.227 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:15:58.227 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:15:58.795 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.795 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:58.795 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.795 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.795 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.795 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.795 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:58.795 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:59.054 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:59.054 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.054 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:59.054 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:59.054 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:59.054 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.054 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:59.054 
12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.054 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.054 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.054 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:59.054 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:59.054 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:59.313 00:15:59.313 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.313 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.313 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.313 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.313 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.313 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.313 12:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.313 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.313 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.313 { 00:15:59.313 "cntlid": 55, 00:15:59.313 "qid": 0, 00:15:59.313 "state": "enabled", 00:15:59.313 "thread": "nvmf_tgt_poll_group_000", 00:15:59.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:59.313 "listen_address": { 00:15:59.313 "trtype": "TCP", 00:15:59.313 "adrfam": "IPv4", 00:15:59.313 "traddr": "10.0.0.2", 00:15:59.313 "trsvcid": "4420" 00:15:59.313 }, 00:15:59.313 "peer_address": { 00:15:59.313 "trtype": "TCP", 00:15:59.313 "adrfam": "IPv4", 00:15:59.313 "traddr": "10.0.0.1", 00:15:59.313 "trsvcid": "44150" 00:15:59.313 }, 00:15:59.313 "auth": { 00:15:59.313 "state": "completed", 00:15:59.313 "digest": "sha384", 00:15:59.313 "dhgroup": "null" 00:15:59.313 } 00:15:59.313 } 00:15:59.313 ]' 00:15:59.313 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.573 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.573 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.573 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:59.573 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.573 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.573 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.573 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.832 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:15:59.833 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:16:00.402 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.402 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:00.402 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.402 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.402 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.402 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:00.402 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.402 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:00.402 12:39:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:00.690 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:00.690 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.690 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:00.690 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:00.690 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:00.690 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.690 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.690 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.690 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.690 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.690 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.690 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.690 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.948 00:16:00.948 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.948 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.948 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.948 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.948 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.948 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.948 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.948 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.948 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.948 { 00:16:00.948 "cntlid": 57, 00:16:00.948 "qid": 0, 00:16:00.948 "state": "enabled", 00:16:00.948 "thread": "nvmf_tgt_poll_group_000", 00:16:00.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:00.948 "listen_address": { 00:16:00.948 "trtype": "TCP", 00:16:00.948 "adrfam": "IPv4", 00:16:00.948 "traddr": "10.0.0.2", 00:16:00.948 
"trsvcid": "4420" 00:16:00.948 }, 00:16:00.948 "peer_address": { 00:16:00.948 "trtype": "TCP", 00:16:00.948 "adrfam": "IPv4", 00:16:00.948 "traddr": "10.0.0.1", 00:16:00.948 "trsvcid": "44180" 00:16:00.948 }, 00:16:00.948 "auth": { 00:16:00.948 "state": "completed", 00:16:00.948 "digest": "sha384", 00:16:00.948 "dhgroup": "ffdhe2048" 00:16:00.948 } 00:16:00.948 } 00:16:00.948 ]' 00:16:00.948 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.205 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.206 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.206 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:01.206 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.206 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.206 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.206 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.463 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:16:01.463 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:16:02.029 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.029 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:02.029 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.029 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.029 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.029 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.030 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:02.030 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:02.030 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:02.030 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.030 12:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:02.030 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:02.030 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:02.030 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.030 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.030 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.030 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.030 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.030 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.030 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.030 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.287 00:16:02.287 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.287 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.287 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.544 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.544 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.544 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.544 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.544 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.544 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.544 { 00:16:02.544 "cntlid": 59, 00:16:02.544 "qid": 0, 00:16:02.544 "state": "enabled", 00:16:02.544 "thread": "nvmf_tgt_poll_group_000", 00:16:02.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:02.544 "listen_address": { 00:16:02.544 "trtype": "TCP", 00:16:02.544 "adrfam": "IPv4", 00:16:02.544 "traddr": "10.0.0.2", 00:16:02.544 "trsvcid": "4420" 00:16:02.544 }, 00:16:02.544 "peer_address": { 00:16:02.544 "trtype": "TCP", 00:16:02.544 "adrfam": "IPv4", 00:16:02.544 "traddr": "10.0.0.1", 00:16:02.544 "trsvcid": "35048" 00:16:02.544 }, 00:16:02.544 "auth": { 00:16:02.544 "state": "completed", 00:16:02.545 "digest": "sha384", 00:16:02.545 "dhgroup": "ffdhe2048" 00:16:02.545 } 00:16:02.545 } 00:16:02.545 ]' 00:16:02.545 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.545 12:39:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.545 12:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.803 12:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:02.803 12:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.803 12:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.803 12:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.803 12:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.061 12:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:16:03.061 12:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:16:03.628 12:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.628 12:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:03.628 12:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.628 12:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.628 12:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.628 12:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.628 12:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:03.628 12:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:03.628 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:03.628 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.628 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:03.628 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:03.628 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:03.628 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.628 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:03.628 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.628 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:03.628 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.628 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:03.628 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:03.628 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:03.888
00:16:03.888 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:03.888 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:03.888 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:04.148 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:04.148 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:04.148 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.148 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:04.148 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.148 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:04.148 {
00:16:04.148 "cntlid": 61,
00:16:04.148 "qid": 0,
00:16:04.148 "state": "enabled",
00:16:04.148 "thread": "nvmf_tgt_poll_group_000",
00:16:04.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:04.148 "listen_address": {
00:16:04.148 "trtype": "TCP",
00:16:04.148 "adrfam": "IPv4",
00:16:04.148 "traddr": "10.0.0.2",
00:16:04.148 "trsvcid": "4420"
00:16:04.148 },
00:16:04.148 "peer_address": {
00:16:04.148 "trtype": "TCP",
00:16:04.148 "adrfam": "IPv4",
00:16:04.148 "traddr": "10.0.0.1",
00:16:04.148 "trsvcid": "35058"
00:16:04.148 },
00:16:04.148 "auth": {
00:16:04.148 "state": "completed",
00:16:04.148 "digest": "sha384",
00:16:04.148 "dhgroup": "ffdhe2048"
00:16:04.148 }
00:16:04.148 }
00:16:04.148 ]'
00:16:04.148 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:04.148 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:04.148 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:04.148 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:04.148 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:04.408 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:04.408 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:04.408 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:04.408 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW:
00:16:04.408 12:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW:
00:16:04.975 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:04.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:04.975 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:04.975 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.975 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:04.975 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.975 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:04.975 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:04.975 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:05.235 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3
00:16:05.235 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:05.235 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:05.235 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:05.235 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:05.235 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:05.235 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:05.235 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:05.235 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:05.235 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:05.235 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:05.235 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:05.235 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:05.495
00:16:05.495 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:05.495 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:05.495 12:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:05.754 12:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:05.754 12:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:05.754 12:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:05.754 12:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:05.754 12:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:05.754 12:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:05.754 {
00:16:05.754 "cntlid": 63,
00:16:05.754 "qid": 0,
00:16:05.754 "state": "enabled",
00:16:05.754 "thread": "nvmf_tgt_poll_group_000",
00:16:05.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:05.754 "listen_address": {
00:16:05.754 "trtype": "TCP",
00:16:05.754 "adrfam": "IPv4",
00:16:05.754 "traddr": "10.0.0.2",
00:16:05.754 "trsvcid": "4420"
00:16:05.754 },
00:16:05.754 "peer_address": {
00:16:05.754 "trtype": "TCP",
00:16:05.754 "adrfam": "IPv4",
00:16:05.754 "traddr": "10.0.0.1",
00:16:05.754 "trsvcid": "35076"
00:16:05.754 },
00:16:05.754 "auth": {
00:16:05.754 "state": "completed",
00:16:05.754 "digest": "sha384",
00:16:05.754 "dhgroup": "ffdhe2048"
00:16:05.754 }
00:16:05.754 }
00:16:05.754 ]'
00:16:05.754 12:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:05.754 12:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:05.754 12:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:05.754 12:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:05.754 12:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:05.754 12:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:05.754 12:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:05.754 12:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:06.013 12:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=:
00:16:06.013 12:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=:
00:16:06.583 12:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:06.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:06.583 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:06.583 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:06.583 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:06.583 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:06.583 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:06.583 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:06.583 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:06.583 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:06.841 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0
00:16:06.841 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:06.841 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:06.841 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:06.841 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:06.841 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:06.841 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:06.841 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:06.841 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:06.841 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:06.841 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:06.841 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:06.841 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:07.100
00:16:07.100 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:07.100 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:07.100 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:07.359 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:07.359 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:07.359 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:07.359 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:07.359 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:07.359 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:07.359 {
00:16:07.359 "cntlid": 65,
00:16:07.359 "qid": 0,
00:16:07.359 "state": "enabled",
00:16:07.359 "thread": "nvmf_tgt_poll_group_000",
00:16:07.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:07.359 "listen_address": {
00:16:07.359 "trtype": "TCP",
00:16:07.359 "adrfam": "IPv4",
00:16:07.359 "traddr": "10.0.0.2",
00:16:07.359 "trsvcid": "4420"
00:16:07.359 },
00:16:07.359 "peer_address": {
00:16:07.359 "trtype": "TCP",
00:16:07.359 "adrfam": "IPv4",
00:16:07.359 "traddr": "10.0.0.1",
00:16:07.359 "trsvcid": "35096"
00:16:07.359 },
00:16:07.359 "auth": {
00:16:07.359 "state": "completed",
00:16:07.359 "digest": "sha384",
00:16:07.359 "dhgroup": "ffdhe3072"
00:16:07.359 }
00:16:07.359 }
00:16:07.359 ]'
00:16:07.359 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:07.359 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:07.359 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:07.359 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:07.359 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:07.359 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:07.359 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:07.359 12:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:07.618 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=:
00:16:07.618 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=:
00:16:08.186 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:08.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:08.186 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:08.186 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:08.186 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:08.186 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:08.186 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:08.186 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:08.186 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:08.446 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1
00:16:08.446 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:08.446 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:08.446 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:08.446 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:08.446 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:08.446 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:08.446 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:08.446 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:08.446 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:08.446 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:08.446 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:08.446 12:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:08.705
00:16:08.705 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:08.705 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:08.705 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:08.964 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:08.964 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:08.964 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:08.964 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:08.964 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:08.964 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:08.964 {
00:16:08.964 "cntlid": 67,
00:16:08.964 "qid": 0,
00:16:08.964 "state": "enabled",
00:16:08.964 "thread": "nvmf_tgt_poll_group_000",
00:16:08.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:08.964 "listen_address": {
00:16:08.964 "trtype": "TCP",
00:16:08.964 "adrfam": "IPv4",
00:16:08.964 "traddr": "10.0.0.2",
00:16:08.964 "trsvcid": "4420"
00:16:08.964 },
00:16:08.964 "peer_address": {
00:16:08.964 "trtype": "TCP",
00:16:08.964 "adrfam": "IPv4",
00:16:08.964 "traddr": "10.0.0.1",
00:16:08.964 "trsvcid": "35110"
00:16:08.964 },
00:16:08.964 "auth": {
00:16:08.964 "state": "completed",
00:16:08.964 "digest": "sha384",
00:16:08.964 "dhgroup": "ffdhe3072"
00:16:08.964 }
00:16:08.964 }
00:16:08.964 ]'
00:16:08.964 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:08.964 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:08.964 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:08.964 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:08.964 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:08.964 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:08.964 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:08.964 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:09.222 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==:
00:16:09.222 12:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==:
00:16:09.788 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:09.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:09.788 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:09.788 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:09.788 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:09.788 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:09.788 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:09.788 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:09.788 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:10.074 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2
00:16:10.074 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:10.074 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:10.074 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:10.075 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:10.075 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:10.075 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:10.075 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.075 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:10.075 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.075 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:10.075 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:10.075 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:10.334
00:16:10.334 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:10.334 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:10.334 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:10.593 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:10.593 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:10.593 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.593 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:10.593 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.593 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:10.593 {
00:16:10.593 "cntlid": 69,
00:16:10.593 "qid": 0,
00:16:10.593 "state": "enabled",
00:16:10.593 "thread": "nvmf_tgt_poll_group_000",
00:16:10.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:10.593 "listen_address": {
00:16:10.593 "trtype": "TCP",
00:16:10.593 "adrfam": "IPv4",
00:16:10.593 "traddr": "10.0.0.2",
00:16:10.593 "trsvcid": "4420"
00:16:10.593 },
00:16:10.593 "peer_address": {
00:16:10.593 "trtype": "TCP",
00:16:10.593 "adrfam": "IPv4",
00:16:10.593 "traddr": "10.0.0.1",
00:16:10.593 "trsvcid": "35132"
00:16:10.593 },
00:16:10.593 "auth": {
00:16:10.593 "state": "completed",
00:16:10.593 "digest": "sha384",
00:16:10.593 "dhgroup": "ffdhe3072"
00:16:10.593 }
00:16:10.593 }
00:16:10.593 ]'
00:16:10.593 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:10.593 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:10.593 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:10.593 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:10.593 12:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:10.593 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:10.593 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:10.593 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:10.852 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW:
00:16:10.852 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW:
00:16:11.420 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:11.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:11.420 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:11.420 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.420 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.420 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.420 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:11.420 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:11.420 12:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:11.680 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3
00:16:11.680 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:11.680 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:11.680 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:11.680 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:11.680 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:11.680 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:11.680 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.680 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.680 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.680 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:11.680 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:11.680 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:11.940
00:16:11.940 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:11.940 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:11.940 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:12.200 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:12.200 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:12.200 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:12.200 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:12.200 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:12.200 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:12.200 {
00:16:12.200 "cntlid": 71,
00:16:12.200 "qid": 0,
00:16:12.200 "state": "enabled",
00:16:12.200 "thread": "nvmf_tgt_poll_group_000",
00:16:12.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:12.200 "listen_address": {
00:16:12.200 "trtype": "TCP",
00:16:12.200 "adrfam": "IPv4",
00:16:12.200 "traddr": "10.0.0.2",
00:16:12.200 "trsvcid": "4420"
00:16:12.200 },
00:16:12.200 "peer_address": {
00:16:12.200 "trtype": "TCP",
00:16:12.200 "adrfam": "IPv4",
00:16:12.200 "traddr": "10.0.0.1",
00:16:12.200 "trsvcid": "47228"
00:16:12.200 },
00:16:12.200 "auth": {
00:16:12.200 "state": "completed",
00:16:12.200 "digest": "sha384",
00:16:12.200 "dhgroup": "ffdhe3072"
00:16:12.200 }
00:16:12.200 }
00:16:12.200 ]'
00:16:12.200 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:12.200 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:12.200 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:12.200 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:12.200 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:12.200 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:12.200 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:12.200 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:12.459 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=:
00:16:12.459 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=:
00:16:13.031 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:13.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:13.031 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.031 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.031 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.031 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.031 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.031 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.031 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:13.031 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:13.291 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:13.291 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.291 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:13.291 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:13.291 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:13.291 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.291 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.291 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.291 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.291 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.291 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.291 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.291 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.551 00:16:13.551 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.551 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.551 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.811 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.811 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.811 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.811 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.811 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.811 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.811 { 00:16:13.811 "cntlid": 73, 00:16:13.811 "qid": 0, 00:16:13.811 "state": "enabled", 00:16:13.811 "thread": "nvmf_tgt_poll_group_000", 00:16:13.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:13.811 "listen_address": { 00:16:13.811 "trtype": "TCP", 00:16:13.811 "adrfam": "IPv4", 00:16:13.811 "traddr": "10.0.0.2", 00:16:13.811 "trsvcid": "4420" 00:16:13.811 }, 00:16:13.811 "peer_address": { 00:16:13.811 "trtype": "TCP", 00:16:13.811 "adrfam": "IPv4", 00:16:13.811 "traddr": "10.0.0.1", 00:16:13.811 "trsvcid": "47256" 00:16:13.811 }, 00:16:13.811 "auth": { 00:16:13.811 "state": "completed", 00:16:13.811 "digest": "sha384", 00:16:13.811 "dhgroup": "ffdhe4096" 00:16:13.811 } 00:16:13.811 } 00:16:13.811 ]' 00:16:13.811 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.812 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:13.812 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.812 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:13.812 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.812 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:13.812 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.812 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.082 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:16:14.082 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:16:14.659 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.659 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:14.659 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.659 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.659 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.659 12:39:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.659 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:14.659 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:14.659 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:14.659 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.659 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:14.659 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:14.659 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:14.659 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.659 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.659 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.659 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.659 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.659 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:14.659 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.659 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.226 00:16:15.226 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.226 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.226 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.226 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.226 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.226 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.226 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.226 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.226 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.226 { 00:16:15.226 "cntlid": 75, 00:16:15.226 "qid": 0, 00:16:15.226 "state": 
"enabled", 00:16:15.226 "thread": "nvmf_tgt_poll_group_000", 00:16:15.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:15.226 "listen_address": { 00:16:15.227 "trtype": "TCP", 00:16:15.227 "adrfam": "IPv4", 00:16:15.227 "traddr": "10.0.0.2", 00:16:15.227 "trsvcid": "4420" 00:16:15.227 }, 00:16:15.227 "peer_address": { 00:16:15.227 "trtype": "TCP", 00:16:15.227 "adrfam": "IPv4", 00:16:15.227 "traddr": "10.0.0.1", 00:16:15.227 "trsvcid": "47278" 00:16:15.227 }, 00:16:15.227 "auth": { 00:16:15.227 "state": "completed", 00:16:15.227 "digest": "sha384", 00:16:15.227 "dhgroup": "ffdhe4096" 00:16:15.227 } 00:16:15.227 } 00:16:15.227 ]' 00:16:15.227 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.227 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.227 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.487 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:15.487 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.487 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.487 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.487 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.487 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret 
DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:16:15.487 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:16:16.053 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.053 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:16.054 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.054 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.311 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.311 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.311 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:16.311 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:16.311 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 
ffdhe4096 2 00:16:16.311 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.311 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:16.311 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:16.311 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:16.311 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.311 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.311 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.311 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.311 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.311 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.311 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.311 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.569 00:16:16.569 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.569 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.569 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.828 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.828 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.828 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.828 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.828 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.828 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.828 { 00:16:16.828 "cntlid": 77, 00:16:16.828 "qid": 0, 00:16:16.828 "state": "enabled", 00:16:16.828 "thread": "nvmf_tgt_poll_group_000", 00:16:16.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:16.828 "listen_address": { 00:16:16.828 "trtype": "TCP", 00:16:16.828 "adrfam": "IPv4", 00:16:16.828 "traddr": "10.0.0.2", 00:16:16.828 "trsvcid": "4420" 00:16:16.828 }, 00:16:16.828 "peer_address": { 00:16:16.828 "trtype": "TCP", 00:16:16.828 "adrfam": "IPv4", 00:16:16.828 "traddr": "10.0.0.1", 00:16:16.828 "trsvcid": "47306" 00:16:16.828 }, 00:16:16.828 "auth": { 00:16:16.828 "state": "completed", 00:16:16.828 "digest": "sha384", 00:16:16.828 "dhgroup": "ffdhe4096" 00:16:16.828 } 
00:16:16.828 } 00:16:16.828 ]' 00:16:16.828 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.828 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:16.828 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.086 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:17.086 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.086 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.086 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.086 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.086 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:16:17.086 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:16:17.653 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:16:17.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.654 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.654 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.654 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.913 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.913 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.913 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:17.913 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:17.913 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:17.913 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.913 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:17.913 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:17.913 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:17.913 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.913 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:17.913 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.913 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.913 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.913 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:17.913 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:17.913 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.172 00:16:18.172 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.172 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.172 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.431 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.431 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:18.431 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.431 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.431 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.431 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.431 { 00:16:18.431 "cntlid": 79, 00:16:18.431 "qid": 0, 00:16:18.431 "state": "enabled", 00:16:18.431 "thread": "nvmf_tgt_poll_group_000", 00:16:18.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:18.431 "listen_address": { 00:16:18.431 "trtype": "TCP", 00:16:18.431 "adrfam": "IPv4", 00:16:18.431 "traddr": "10.0.0.2", 00:16:18.431 "trsvcid": "4420" 00:16:18.431 }, 00:16:18.431 "peer_address": { 00:16:18.431 "trtype": "TCP", 00:16:18.431 "adrfam": "IPv4", 00:16:18.431 "traddr": "10.0.0.1", 00:16:18.431 "trsvcid": "47334" 00:16:18.431 }, 00:16:18.431 "auth": { 00:16:18.431 "state": "completed", 00:16:18.431 "digest": "sha384", 00:16:18.431 "dhgroup": "ffdhe4096" 00:16:18.431 } 00:16:18.431 } 00:16:18.431 ]' 00:16:18.431 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.431 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.431 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.690 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:18.690 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.690 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.690 12:40:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.690 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.690 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:16:18.690 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:16:19.258 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.258 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:19.258 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.259 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.517 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.517 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.517 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.517 12:40:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:19.517 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:19.517 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:19.517 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.517 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:19.517 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:19.517 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:19.517 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.517 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.517 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.517 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.517 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.517 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.517 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.517 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.085 00:16:20.085 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.085 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.085 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.085 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.085 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.085 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.085 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.085 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.085 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.085 { 00:16:20.085 "cntlid": 81, 00:16:20.085 "qid": 0, 00:16:20.085 "state": "enabled", 00:16:20.085 "thread": "nvmf_tgt_poll_group_000", 00:16:20.085 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:20.085 "listen_address": { 00:16:20.085 "trtype": "TCP", 00:16:20.085 "adrfam": "IPv4", 00:16:20.085 "traddr": "10.0.0.2", 00:16:20.085 "trsvcid": "4420" 00:16:20.085 }, 00:16:20.085 "peer_address": { 00:16:20.085 "trtype": "TCP", 00:16:20.085 "adrfam": "IPv4", 00:16:20.085 "traddr": "10.0.0.1", 00:16:20.085 "trsvcid": "47348" 00:16:20.085 }, 00:16:20.085 "auth": { 00:16:20.085 "state": "completed", 00:16:20.085 "digest": "sha384", 00:16:20.085 "dhgroup": "ffdhe6144" 00:16:20.085 } 00:16:20.085 } 00:16:20.085 ]' 00:16:20.085 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.085 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.085 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.344 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:20.344 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.344 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.344 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.344 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.344 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret 
DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:16:20.344 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:16:20.913 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.172 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.172 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.172 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.172 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.172 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.172 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:21.172 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:21.172 12:40:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:21.172 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.172 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:21.172 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:21.172 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:21.172 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.172 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.172 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.173 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.173 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.173 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.173 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.173 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.742 00:16:21.742 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.742 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.742 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.742 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.742 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.742 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.742 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.742 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.742 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.742 { 00:16:21.742 "cntlid": 83, 00:16:21.742 "qid": 0, 00:16:21.742 "state": "enabled", 00:16:21.742 "thread": "nvmf_tgt_poll_group_000", 00:16:21.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:21.742 "listen_address": { 00:16:21.742 "trtype": "TCP", 00:16:21.742 "adrfam": "IPv4", 00:16:21.742 "traddr": "10.0.0.2", 00:16:21.742 "trsvcid": "4420" 00:16:21.742 }, 00:16:21.742 "peer_address": { 00:16:21.742 "trtype": "TCP", 00:16:21.742 "adrfam": "IPv4", 00:16:21.742 "traddr": "10.0.0.1", 00:16:21.742 "trsvcid": "60856" 00:16:21.742 }, 00:16:21.742 "auth": { 00:16:21.742 "state": 
"completed", 00:16:21.742 "digest": "sha384", 00:16:21.742 "dhgroup": "ffdhe6144" 00:16:21.742 } 00:16:21.742 } 00:16:21.742 ]' 00:16:21.742 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.001 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:22.001 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.001 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:22.001 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.001 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.001 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.001 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.260 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:16:22.260 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:16:22.827 12:40:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.827 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:22.827 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.827 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.828 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.828 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.828 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:22.828 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:23.086 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:23.086 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.086 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:23.086 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:23.086 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:23.086 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.086 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.086 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.086 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.086 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.086 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.086 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.086 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.345 00:16:23.345 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.345 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.345 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.604 
12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.604 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.604 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.604 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.605 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.605 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.605 { 00:16:23.605 "cntlid": 85, 00:16:23.605 "qid": 0, 00:16:23.605 "state": "enabled", 00:16:23.605 "thread": "nvmf_tgt_poll_group_000", 00:16:23.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:23.605 "listen_address": { 00:16:23.605 "trtype": "TCP", 00:16:23.605 "adrfam": "IPv4", 00:16:23.605 "traddr": "10.0.0.2", 00:16:23.605 "trsvcid": "4420" 00:16:23.605 }, 00:16:23.605 "peer_address": { 00:16:23.605 "trtype": "TCP", 00:16:23.605 "adrfam": "IPv4", 00:16:23.605 "traddr": "10.0.0.1", 00:16:23.605 "trsvcid": "60890" 00:16:23.605 }, 00:16:23.605 "auth": { 00:16:23.605 "state": "completed", 00:16:23.605 "digest": "sha384", 00:16:23.605 "dhgroup": "ffdhe6144" 00:16:23.605 } 00:16:23.605 } 00:16:23.605 ]' 00:16:23.605 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.605 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:23.605 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.605 12:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:23.605 12:40:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.605 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.605 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.605 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.865 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:16:23.865 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:16:24.433 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.433 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.433 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.433 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.433 
12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.433 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.433 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:24.433 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:24.693 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:24.693 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.693 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:24.693 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:24.693 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:24.693 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.693 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:24.693 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.693 12:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.693 12:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.693 12:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:24.693 12:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.693 12:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.952 00:16:24.952 12:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.952 12:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.952 12:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.209 12:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.209 12:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.209 12:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.209 12:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.210 12:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.210 12:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.210 { 00:16:25.210 "cntlid": 87, 00:16:25.210 
"qid": 0, 00:16:25.210 "state": "enabled", 00:16:25.210 "thread": "nvmf_tgt_poll_group_000", 00:16:25.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:25.210 "listen_address": { 00:16:25.210 "trtype": "TCP", 00:16:25.210 "adrfam": "IPv4", 00:16:25.210 "traddr": "10.0.0.2", 00:16:25.210 "trsvcid": "4420" 00:16:25.210 }, 00:16:25.210 "peer_address": { 00:16:25.210 "trtype": "TCP", 00:16:25.210 "adrfam": "IPv4", 00:16:25.210 "traddr": "10.0.0.1", 00:16:25.210 "trsvcid": "60910" 00:16:25.210 }, 00:16:25.210 "auth": { 00:16:25.210 "state": "completed", 00:16:25.210 "digest": "sha384", 00:16:25.210 "dhgroup": "ffdhe6144" 00:16:25.210 } 00:16:25.210 } 00:16:25.210 ]' 00:16:25.210 12:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.210 12:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:25.210 12:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.210 12:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:25.210 12:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.210 12:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.210 12:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.210 12:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.468 12:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:16:25.468 12:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:16:26.036 12:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.036 12:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.036 12:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.036 12:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.036 12:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.036 12:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.036 12:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.036 12:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:26.036 12:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:26.296 12:40:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:16:26.296 12:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:26.296 12:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:26.296 12:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:26.296 12:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:26.296 12:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:26.296 12:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:26.296 12:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:26.296 12:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.296 12:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:26.296 12:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:26.296 12:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:26.296 12:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:26.865
00:16:26.865 12:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:26.865 12:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:26.865 12:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:27.124 12:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:27.124 12:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:27.124 12:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:27.124 12:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.124 12:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:27.124 12:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:27.124 {
00:16:27.124 "cntlid": 89,
00:16:27.125 "qid": 0,
00:16:27.125 "state": "enabled",
00:16:27.125 "thread": "nvmf_tgt_poll_group_000",
00:16:27.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:27.125 "listen_address": {
00:16:27.125 "trtype": "TCP",
00:16:27.125 "adrfam": "IPv4",
00:16:27.125 "traddr": "10.0.0.2",
00:16:27.125 "trsvcid": "4420"
00:16:27.125 },
00:16:27.125 "peer_address": {
00:16:27.125 "trtype": "TCP",
00:16:27.125 "adrfam": "IPv4",
00:16:27.125 "traddr": "10.0.0.1",
00:16:27.125 "trsvcid": "60930"
00:16:27.125 },
00:16:27.125 "auth": {
00:16:27.125 "state": "completed",
00:16:27.125 "digest": "sha384",
00:16:27.125 "dhgroup": "ffdhe8192"
00:16:27.125 }
00:16:27.125 }
00:16:27.125 ]'
00:16:27.125 12:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:27.125 12:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:27.125 12:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:27.125 12:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:27.125 12:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:27.125 12:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:27.125 12:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:27.125 12:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:27.384 12:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=:
00:16:27.384 12:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=:
00:16:28.006 12:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:28.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:28.006 12:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:28.006 12:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.006 12:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.006 12:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.006 12:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:28.006 12:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:16:28.006 12:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:16:28.343 12:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1
00:16:28.343 12:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:28.343 12:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:28.343 12:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:28.343 12:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:28.343 12:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:28.343 12:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:28.343 12:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.343 12:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.343 12:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.343 12:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:28.343 12:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:28.343 12:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:28.633
00:16:28.633 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:28.633 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:28.633 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:28.916 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:28.916 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:28.916 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.916 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.916 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.916 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:28.916 {
00:16:28.916 "cntlid": 91,
00:16:28.916 "qid": 0,
00:16:28.916 "state": "enabled",
00:16:28.916 "thread": "nvmf_tgt_poll_group_000",
00:16:28.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:28.916 "listen_address": {
00:16:28.916 "trtype": "TCP",
00:16:28.916 "adrfam": "IPv4",
00:16:28.916 "traddr": "10.0.0.2",
00:16:28.916 "trsvcid": "4420"
00:16:28.916 },
00:16:28.916 "peer_address": {
00:16:28.916 "trtype": "TCP",
00:16:28.916 "adrfam": "IPv4",
00:16:28.916 "traddr": "10.0.0.1",
00:16:28.916 "trsvcid": "60958"
00:16:28.917 },
00:16:28.917 "auth": {
00:16:28.917 "state": "completed",
00:16:28.917 "digest": "sha384",
00:16:28.917 "dhgroup": "ffdhe8192"
00:16:28.917 }
00:16:28.917 }
00:16:28.917 ]'
00:16:28.917 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:28.917 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:28.917 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:28.917 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:28.917 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:28.917 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:28.917 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:28.917 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:29.175 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==:
00:16:29.175 12:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==:
00:16:29.742 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:29.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:29.742 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:29.742 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.742 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.742 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.742 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:29.742 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:16:29.742 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:16:30.002 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2
00:16:30.002 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:30.002 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:30.002 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:30.002 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:30.002 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:30.002 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:30.002 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.002 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:30.002 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.002 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:30.002 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:30.002 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:30.571
00:16:30.571 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:30.571 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:30.571 12:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:30.830 12:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:30.830 12:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:30.830 12:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.830 12:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:30.830 12:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.830 12:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:30.830 {
00:16:30.830 "cntlid": 93,
00:16:30.830 "qid": 0,
00:16:30.830 "state": "enabled",
00:16:30.830 "thread": "nvmf_tgt_poll_group_000",
00:16:30.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:30.830 "listen_address": {
00:16:30.830 "trtype": "TCP",
00:16:30.830 "adrfam": "IPv4",
00:16:30.830 "traddr": "10.0.0.2",
00:16:30.830 "trsvcid": "4420"
00:16:30.830 },
00:16:30.830 "peer_address": {
00:16:30.830 "trtype": "TCP",
00:16:30.830 "adrfam": "IPv4",
00:16:30.830 "traddr": "10.0.0.1",
00:16:30.830 "trsvcid": "60990"
00:16:30.830 },
00:16:30.830 "auth": {
00:16:30.830 "state": "completed",
00:16:30.830 "digest": "sha384",
00:16:30.830 "dhgroup": "ffdhe8192"
00:16:30.830 }
00:16:30.830 }
00:16:30.830 ]'
00:16:30.830 12:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:30.830 12:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:30.830 12:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:30.830 12:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:30.830 12:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:30.830 12:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:30.830 12:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:30.830 12:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:31.090 12:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW:
00:16:31.090 12:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW:
00:16:31.658 12:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:31.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:31.658 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:31.658 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.658 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.658 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.658 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:31.658 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:16:31.658 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:16:31.917 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:16:31.917 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:31.917 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:31.917 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:31.917 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:31.918 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:31.918 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:31.918 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.918 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.918 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.918 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:31.918 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:31.918 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:32.486
00:16:32.486 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:32.486 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:32.486 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:32.486 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:32.486 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:32.486 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:32.486 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:32.486 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:32.486 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:32.486 {
00:16:32.486 "cntlid": 95,
00:16:32.486 "qid": 0,
00:16:32.486 "state": "enabled",
00:16:32.486 "thread": "nvmf_tgt_poll_group_000",
00:16:32.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:32.486 "listen_address": {
00:16:32.486 "trtype": "TCP",
00:16:32.486 "adrfam": "IPv4",
00:16:32.486 "traddr": "10.0.0.2",
00:16:32.486 "trsvcid": "4420"
00:16:32.486 },
00:16:32.486 "peer_address": {
00:16:32.486 "trtype": "TCP",
00:16:32.486 "adrfam": "IPv4",
00:16:32.486 "traddr": "10.0.0.1",
00:16:32.486 "trsvcid": "53870"
00:16:32.486 },
00:16:32.486 "auth": {
00:16:32.486 "state": "completed",
00:16:32.486 "digest": "sha384",
00:16:32.486 "dhgroup": "ffdhe8192"
00:16:32.486 }
00:16:32.486 }
00:16:32.486 ]'
00:16:32.486 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:32.486 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:32.486 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:32.756 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:32.756 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:32.756 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:32.756 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:32.756 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:32.756 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=:
00:16:32.756 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=:
00:16:33.330 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:33.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:33.330 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:33.330 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.330 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.330 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.330 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:16:33.330 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:33.330 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:33.330 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:33.330 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:33.588 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:16:33.588 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:33.588 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:33.588 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:16:33.588 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:33.588 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:33.588 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:33.588 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.589 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.589 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.589 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:33.589 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:33.589 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:33.848
00:16:33.848 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:33.848 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:33.848 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:34.107 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:34.107 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:34.107 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:34.107 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:34.107 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:34.107 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:34.107 {
00:16:34.107 "cntlid": 97,
00:16:34.107 "qid": 0,
00:16:34.107 "state": "enabled",
00:16:34.107 "thread": "nvmf_tgt_poll_group_000",
00:16:34.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:34.107 "listen_address": {
00:16:34.107 "trtype": "TCP",
00:16:34.107 "adrfam": "IPv4",
00:16:34.107 "traddr": "10.0.0.2",
00:16:34.107 "trsvcid": "4420"
00:16:34.107 },
00:16:34.107 "peer_address": {
00:16:34.107 "trtype": "TCP",
00:16:34.107 "adrfam": "IPv4",
00:16:34.107 "traddr": "10.0.0.1",
00:16:34.107 "trsvcid": "53902"
00:16:34.107 },
00:16:34.107 "auth": {
00:16:34.107 "state": "completed",
00:16:34.107 "digest": "sha512",
00:16:34.107 "dhgroup": "null"
00:16:34.107 }
00:16:34.107 }
00:16:34.107 ]'
00:16:34.107 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:34.107 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:34.107 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:34.107 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:34.107 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:34.107 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:34.107 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:34.107 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:34.365 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=:
00:16:34.365 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=:
00:16:34.931 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:34.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:34.931 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:34.931 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:34.931 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:34.931 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:34.931 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:34.931 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:34.931 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:35.190 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:16:35.190 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:35.190 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:35.190 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:16:35.190 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:35.190 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:35.190 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:35.190 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:35.190 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.190 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:35.190 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:35.190 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:35.190 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:35.449
00:16:35.449 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:35.449 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:35.449 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:35.707 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:35.707 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:35.707 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:35.707 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.707 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:35.707 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:35.707 {
00:16:35.707 "cntlid": 99,
00:16:35.707 "qid": 0,
00:16:35.707 "state": "enabled",
00:16:35.707 "thread": "nvmf_tgt_poll_group_000",
00:16:35.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:35.707 "listen_address": {
00:16:35.707 "trtype": "TCP",
00:16:35.707 "adrfam": "IPv4",
00:16:35.707 "traddr": "10.0.0.2",
00:16:35.707 "trsvcid": "4420"
00:16:35.707 },
00:16:35.707 "peer_address": {
00:16:35.707 "trtype": "TCP",
00:16:35.707 "adrfam": "IPv4",
00:16:35.707 "traddr": "10.0.0.1",
00:16:35.707 "trsvcid": "53914"
00:16:35.707 },
00:16:35.707 "auth": {
00:16:35.707 "state": "completed",
00:16:35.707 "digest": "sha512",
00:16:35.707 "dhgroup": "null"
00:16:35.707 }
00:16:35.707 }
00:16:35.707 ]'
00:16:35.707 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:35.707 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:35.707 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:35.707 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:35.707 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:35.707 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:35.707 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:35.707 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.966 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:16:35.966 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:16:36.534 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.534 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.534 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.534 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.534 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.534 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.534 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:36.534 12:40:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:36.794 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:36.794 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.794 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:36.794 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:36.794 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:36.794 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.794 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.794 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.794 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.794 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.794 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.794 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:16:36.794 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.054 00:16:37.054 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.054 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.054 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.312 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.312 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.312 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.312 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.312 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.312 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.312 { 00:16:37.312 "cntlid": 101, 00:16:37.312 "qid": 0, 00:16:37.312 "state": "enabled", 00:16:37.312 "thread": "nvmf_tgt_poll_group_000", 00:16:37.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:37.312 "listen_address": { 00:16:37.312 "trtype": "TCP", 00:16:37.312 "adrfam": "IPv4", 00:16:37.312 "traddr": "10.0.0.2", 00:16:37.312 "trsvcid": "4420" 
00:16:37.312 }, 00:16:37.312 "peer_address": { 00:16:37.312 "trtype": "TCP", 00:16:37.312 "adrfam": "IPv4", 00:16:37.312 "traddr": "10.0.0.1", 00:16:37.312 "trsvcid": "53940" 00:16:37.312 }, 00:16:37.312 "auth": { 00:16:37.312 "state": "completed", 00:16:37.312 "digest": "sha512", 00:16:37.312 "dhgroup": "null" 00:16:37.312 } 00:16:37.312 } 00:16:37.312 ]' 00:16:37.312 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.312 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.312 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.312 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:37.312 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.312 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.312 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.312 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.571 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:16:37.571 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 
--dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:16:38.139 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.139 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.139 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.139 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.139 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.139 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.139 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:38.139 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:38.398 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:38.399 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.399 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:38.399 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:38.399 12:40:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:38.399 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.399 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:38.399 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.399 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.399 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.399 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:38.399 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.399 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.658 00:16:38.658 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.658 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.658 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.918 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.918 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.918 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.918 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.918 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.918 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.918 { 00:16:38.918 "cntlid": 103, 00:16:38.918 "qid": 0, 00:16:38.918 "state": "enabled", 00:16:38.918 "thread": "nvmf_tgt_poll_group_000", 00:16:38.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:38.918 "listen_address": { 00:16:38.918 "trtype": "TCP", 00:16:38.918 "adrfam": "IPv4", 00:16:38.918 "traddr": "10.0.0.2", 00:16:38.918 "trsvcid": "4420" 00:16:38.918 }, 00:16:38.918 "peer_address": { 00:16:38.918 "trtype": "TCP", 00:16:38.918 "adrfam": "IPv4", 00:16:38.918 "traddr": "10.0.0.1", 00:16:38.918 "trsvcid": "53966" 00:16:38.918 }, 00:16:38.918 "auth": { 00:16:38.918 "state": "completed", 00:16:38.918 "digest": "sha512", 00:16:38.918 "dhgroup": "null" 00:16:38.918 } 00:16:38.918 } 00:16:38.918 ]' 00:16:38.918 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.918 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.918 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.918 12:40:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:38.918 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.918 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.918 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.918 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.178 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:16:39.178 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:16:39.747 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.747 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.747 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.747 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.747 12:40:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.747 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.747 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.747 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:39.747 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:40.006 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:40.006 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.006 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:40.006 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:40.006 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:40.006 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.006 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.006 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.006 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.006 
12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.006 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.006 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.006 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.266 00:16:40.266 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.266 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.266 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.266 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.266 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.266 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.266 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.266 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.266 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.266 { 00:16:40.266 "cntlid": 105, 00:16:40.266 "qid": 0, 00:16:40.266 "state": "enabled", 00:16:40.266 "thread": "nvmf_tgt_poll_group_000", 00:16:40.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:40.266 "listen_address": { 00:16:40.266 "trtype": "TCP", 00:16:40.266 "adrfam": "IPv4", 00:16:40.266 "traddr": "10.0.0.2", 00:16:40.266 "trsvcid": "4420" 00:16:40.266 }, 00:16:40.266 "peer_address": { 00:16:40.266 "trtype": "TCP", 00:16:40.266 "adrfam": "IPv4", 00:16:40.266 "traddr": "10.0.0.1", 00:16:40.266 "trsvcid": "53998" 00:16:40.266 }, 00:16:40.266 "auth": { 00:16:40.266 "state": "completed", 00:16:40.266 "digest": "sha512", 00:16:40.266 "dhgroup": "ffdhe2048" 00:16:40.266 } 00:16:40.266 } 00:16:40.266 ]' 00:16:40.266 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.525 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.525 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.525 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:40.525 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.525 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.526 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.526 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:16:40.784 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:16:40.784 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:16:41.352 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.352 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.352 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.352 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.352 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.352 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.353 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:41.353 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:41.353 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:41.353 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.353 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.353 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:41.353 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:41.353 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.353 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.353 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.353 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.612 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.612 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.612 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.612 12:40:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.612 00:16:41.871 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.871 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.871 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.871 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.871 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.871 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.871 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.871 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.871 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.871 { 00:16:41.871 "cntlid": 107, 00:16:41.871 "qid": 0, 00:16:41.871 "state": "enabled", 00:16:41.871 "thread": "nvmf_tgt_poll_group_000", 00:16:41.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:41.871 "listen_address": { 00:16:41.871 "trtype": "TCP", 00:16:41.871 "adrfam": "IPv4", 00:16:41.871 "traddr": "10.0.0.2", 00:16:41.871 "trsvcid": "4420" 00:16:41.871 }, 00:16:41.871 "peer_address": { 
00:16:41.871 "trtype": "TCP", 00:16:41.871 "adrfam": "IPv4", 00:16:41.871 "traddr": "10.0.0.1", 00:16:41.871 "trsvcid": "60784" 00:16:41.871 }, 00:16:41.871 "auth": { 00:16:41.871 "state": "completed", 00:16:41.871 "digest": "sha512", 00:16:41.871 "dhgroup": "ffdhe2048" 00:16:41.871 } 00:16:41.871 } 00:16:41.871 ]' 00:16:41.871 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.871 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.130 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.130 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:42.130 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.130 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.130 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.130 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.389 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:16:42.389 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:16:42.957 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.957 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:42.957 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.957 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.957 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.957 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.957 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:42.957 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:42.957 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:42.957 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.957 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:42.957 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:42.957 12:40:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:42.957 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.957 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.957 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.957 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.957 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.957 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.957 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.957 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.217 00:16:43.217 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.217 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.217 12:40:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.476 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.476 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.476 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.476 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.476 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.476 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.476 { 00:16:43.476 "cntlid": 109, 00:16:43.476 "qid": 0, 00:16:43.476 "state": "enabled", 00:16:43.477 "thread": "nvmf_tgt_poll_group_000", 00:16:43.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:43.477 "listen_address": { 00:16:43.477 "trtype": "TCP", 00:16:43.477 "adrfam": "IPv4", 00:16:43.477 "traddr": "10.0.0.2", 00:16:43.477 "trsvcid": "4420" 00:16:43.477 }, 00:16:43.477 "peer_address": { 00:16:43.477 "trtype": "TCP", 00:16:43.477 "adrfam": "IPv4", 00:16:43.477 "traddr": "10.0.0.1", 00:16:43.477 "trsvcid": "60812" 00:16:43.477 }, 00:16:43.477 "auth": { 00:16:43.477 "state": "completed", 00:16:43.477 "digest": "sha512", 00:16:43.477 "dhgroup": "ffdhe2048" 00:16:43.477 } 00:16:43.477 } 00:16:43.477 ]' 00:16:43.477 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.477 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.477 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:16:43.477 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:43.477 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.736 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.736 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.736 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.736 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:16:43.736 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:16:44.305 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.305 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.305 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.305 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.305 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.305 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.305 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:44.305 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:44.565 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:44.565 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.565 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:44.565 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:44.565 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:44.565 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.565 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:44.565 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.565 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:44.565 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.565 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:44.565 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.565 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.824 00:16:44.824 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.824 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.824 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.084 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.084 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.084 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.084 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.084 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.084 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.084 { 00:16:45.084 "cntlid": 111, 00:16:45.084 "qid": 0, 00:16:45.084 "state": "enabled", 00:16:45.084 "thread": "nvmf_tgt_poll_group_000", 00:16:45.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:45.084 "listen_address": { 00:16:45.084 "trtype": "TCP", 00:16:45.084 "adrfam": "IPv4", 00:16:45.084 "traddr": "10.0.0.2", 00:16:45.084 "trsvcid": "4420" 00:16:45.084 }, 00:16:45.084 "peer_address": { 00:16:45.084 "trtype": "TCP", 00:16:45.084 "adrfam": "IPv4", 00:16:45.084 "traddr": "10.0.0.1", 00:16:45.084 "trsvcid": "60848" 00:16:45.084 }, 00:16:45.084 "auth": { 00:16:45.084 "state": "completed", 00:16:45.084 "digest": "sha512", 00:16:45.084 "dhgroup": "ffdhe2048" 00:16:45.084 } 00:16:45.084 } 00:16:45.084 ]' 00:16:45.084 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.084 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.084 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.084 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:45.084 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.344 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.344 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.344 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:16:45.344 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:16:45.344 12:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:16:45.913 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.913 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.913 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.913 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.913 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.913 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.913 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.913 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:45.913 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:46.172 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:46.172 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.172 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.172 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:46.172 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:46.172 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.172 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.172 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.172 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.172 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.172 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.172 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.173 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.432 00:16:46.432 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.432 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.432 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.692 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.692 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.692 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.692 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.692 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.692 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.692 { 00:16:46.692 "cntlid": 113, 00:16:46.692 "qid": 0, 00:16:46.692 "state": "enabled", 00:16:46.692 "thread": "nvmf_tgt_poll_group_000", 00:16:46.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:46.692 "listen_address": { 00:16:46.692 "trtype": "TCP", 00:16:46.692 "adrfam": "IPv4", 00:16:46.692 "traddr": "10.0.0.2", 00:16:46.692 "trsvcid": "4420" 00:16:46.692 }, 00:16:46.692 "peer_address": { 00:16:46.692 "trtype": "TCP", 00:16:46.692 "adrfam": "IPv4", 
00:16:46.692 "traddr": "10.0.0.1", 00:16:46.692 "trsvcid": "60872" 00:16:46.692 }, 00:16:46.692 "auth": { 00:16:46.692 "state": "completed", 00:16:46.692 "digest": "sha512", 00:16:46.692 "dhgroup": "ffdhe3072" 00:16:46.692 } 00:16:46.692 } 00:16:46.692 ]' 00:16:46.692 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.692 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.692 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.692 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:46.692 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.952 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.952 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.952 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.952 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:16:46.953 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:16:47.532 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.532 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:47.532 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.532 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.532 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.532 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.532 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:47.532 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:47.792 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:47.792 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.792 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:47.792 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe3072 00:16:47.792 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:47.792 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.792 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.792 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.792 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.792 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.792 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.792 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.792 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.051 00:16:48.051 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.051 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.051 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.311 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.311 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.311 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.311 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.311 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.311 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.311 { 00:16:48.311 "cntlid": 115, 00:16:48.311 "qid": 0, 00:16:48.311 "state": "enabled", 00:16:48.311 "thread": "nvmf_tgt_poll_group_000", 00:16:48.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:48.311 "listen_address": { 00:16:48.311 "trtype": "TCP", 00:16:48.311 "adrfam": "IPv4", 00:16:48.311 "traddr": "10.0.0.2", 00:16:48.311 "trsvcid": "4420" 00:16:48.311 }, 00:16:48.311 "peer_address": { 00:16:48.311 "trtype": "TCP", 00:16:48.311 "adrfam": "IPv4", 00:16:48.311 "traddr": "10.0.0.1", 00:16:48.311 "trsvcid": "60906" 00:16:48.311 }, 00:16:48.311 "auth": { 00:16:48.311 "state": "completed", 00:16:48.311 "digest": "sha512", 00:16:48.311 "dhgroup": "ffdhe3072" 00:16:48.311 } 00:16:48.311 } 00:16:48.311 ]' 00:16:48.311 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.311 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.311 12:40:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.311 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:48.311 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.311 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.311 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.311 12:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.571 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:16:48.571 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:16:49.140 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.140 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.140 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.140 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.140 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.140 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.140 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:49.140 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:49.400 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:49.400 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.400 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.400 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:49.400 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:49.400 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.400 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.400 12:40:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.400 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.400 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.400 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.400 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.400 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.659 00:16:49.659 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.659 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.659 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.918 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.918 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.918 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.918 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.918 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.918 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.918 { 00:16:49.918 "cntlid": 117, 00:16:49.918 "qid": 0, 00:16:49.918 "state": "enabled", 00:16:49.918 "thread": "nvmf_tgt_poll_group_000", 00:16:49.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:49.918 "listen_address": { 00:16:49.918 "trtype": "TCP", 00:16:49.918 "adrfam": "IPv4", 00:16:49.918 "traddr": "10.0.0.2", 00:16:49.918 "trsvcid": "4420" 00:16:49.918 }, 00:16:49.918 "peer_address": { 00:16:49.918 "trtype": "TCP", 00:16:49.918 "adrfam": "IPv4", 00:16:49.918 "traddr": "10.0.0.1", 00:16:49.918 "trsvcid": "60914" 00:16:49.918 }, 00:16:49.918 "auth": { 00:16:49.918 "state": "completed", 00:16:49.918 "digest": "sha512", 00:16:49.918 "dhgroup": "ffdhe3072" 00:16:49.918 } 00:16:49.918 } 00:16:49.918 ]' 00:16:49.918 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.918 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.918 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.918 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:49.918 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.918 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.918 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:49.918 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.178 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:16:50.178 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:16:50.746 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.746 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.746 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.746 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.746 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.746 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.746 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:50.746 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:51.006 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:51.006 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.006 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:51.006 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:51.006 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:51.006 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.006 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:51.006 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.006 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.006 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.006 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:51.006 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.006 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.265 00:16:51.265 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.265 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.265 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.524 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.524 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.524 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.525 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.525 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.525 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.525 { 00:16:51.525 "cntlid": 119, 00:16:51.525 "qid": 0, 00:16:51.525 "state": "enabled", 00:16:51.525 "thread": "nvmf_tgt_poll_group_000", 00:16:51.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:51.525 "listen_address": { 00:16:51.525 "trtype": "TCP", 00:16:51.525 "adrfam": "IPv4", 00:16:51.525 "traddr": "10.0.0.2", 00:16:51.525 "trsvcid": 
"4420" 00:16:51.525 }, 00:16:51.525 "peer_address": { 00:16:51.525 "trtype": "TCP", 00:16:51.525 "adrfam": "IPv4", 00:16:51.525 "traddr": "10.0.0.1", 00:16:51.525 "trsvcid": "60926" 00:16:51.525 }, 00:16:51.525 "auth": { 00:16:51.525 "state": "completed", 00:16:51.525 "digest": "sha512", 00:16:51.525 "dhgroup": "ffdhe3072" 00:16:51.525 } 00:16:51.525 } 00:16:51.525 ]' 00:16:51.525 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.525 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.525 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.525 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:51.525 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.525 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.525 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.525 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.784 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:16:51.784 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:16:52.354 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.354 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.354 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.354 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.354 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.354 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.354 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.354 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:52.354 12:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:52.614 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:52.614 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.614 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:52.614 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:52.614 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:52.614 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.614 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.614 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.614 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.614 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.614 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.614 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.614 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.872 00:16:52.872 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.872 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:52.872 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.130 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.130 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.130 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.130 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.130 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.130 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.130 { 00:16:53.130 "cntlid": 121, 00:16:53.130 "qid": 0, 00:16:53.130 "state": "enabled", 00:16:53.130 "thread": "nvmf_tgt_poll_group_000", 00:16:53.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:53.130 "listen_address": { 00:16:53.130 "trtype": "TCP", 00:16:53.130 "adrfam": "IPv4", 00:16:53.130 "traddr": "10.0.0.2", 00:16:53.130 "trsvcid": "4420" 00:16:53.130 }, 00:16:53.130 "peer_address": { 00:16:53.130 "trtype": "TCP", 00:16:53.130 "adrfam": "IPv4", 00:16:53.130 "traddr": "10.0.0.1", 00:16:53.130 "trsvcid": "40880" 00:16:53.130 }, 00:16:53.130 "auth": { 00:16:53.130 "state": "completed", 00:16:53.130 "digest": "sha512", 00:16:53.130 "dhgroup": "ffdhe4096" 00:16:53.130 } 00:16:53.130 } 00:16:53.130 ]' 00:16:53.130 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.130 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.130 12:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.130 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:53.130 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.130 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.130 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.130 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.389 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:16:53.389 12:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:16:53.958 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.958 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.958 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.958 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.958 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.958 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.958 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:53.959 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:54.218 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:54.218 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.218 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:54.218 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:54.218 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:54.218 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.218 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.218 12:40:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.218 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.218 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.218 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.218 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.218 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.479 00:16:54.479 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.479 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.479 12:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.739 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.739 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.739 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.739 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.739 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.739 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.739 { 00:16:54.739 "cntlid": 123, 00:16:54.739 "qid": 0, 00:16:54.739 "state": "enabled", 00:16:54.739 "thread": "nvmf_tgt_poll_group_000", 00:16:54.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:54.739 "listen_address": { 00:16:54.739 "trtype": "TCP", 00:16:54.739 "adrfam": "IPv4", 00:16:54.739 "traddr": "10.0.0.2", 00:16:54.739 "trsvcid": "4420" 00:16:54.739 }, 00:16:54.739 "peer_address": { 00:16:54.739 "trtype": "TCP", 00:16:54.739 "adrfam": "IPv4", 00:16:54.739 "traddr": "10.0.0.1", 00:16:54.739 "trsvcid": "40908" 00:16:54.739 }, 00:16:54.739 "auth": { 00:16:54.739 "state": "completed", 00:16:54.739 "digest": "sha512", 00:16:54.739 "dhgroup": "ffdhe4096" 00:16:54.739 } 00:16:54.739 } 00:16:54.739 ]' 00:16:54.739 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.739 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.739 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.739 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:54.739 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.739 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.739 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:54.739 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.999 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:16:54.999 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:16:55.568 12:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.568 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.568 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.568 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.568 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.568 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.568 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:55.568 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:55.827 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:55.827 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.827 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:55.827 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:55.827 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:55.827 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.827 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.827 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.827 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.827 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.827 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.827 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.827 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.087 00:16:56.087 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.087 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.087 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.348 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.348 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.348 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.348 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.348 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.348 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.348 { 00:16:56.348 "cntlid": 125, 00:16:56.348 "qid": 0, 00:16:56.348 "state": "enabled", 00:16:56.348 "thread": "nvmf_tgt_poll_group_000", 00:16:56.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:56.348 "listen_address": { 
00:16:56.348 "trtype": "TCP", 00:16:56.348 "adrfam": "IPv4", 00:16:56.348 "traddr": "10.0.0.2", 00:16:56.348 "trsvcid": "4420" 00:16:56.348 }, 00:16:56.348 "peer_address": { 00:16:56.348 "trtype": "TCP", 00:16:56.348 "adrfam": "IPv4", 00:16:56.348 "traddr": "10.0.0.1", 00:16:56.348 "trsvcid": "40926" 00:16:56.348 }, 00:16:56.348 "auth": { 00:16:56.348 "state": "completed", 00:16:56.348 "digest": "sha512", 00:16:56.348 "dhgroup": "ffdhe4096" 00:16:56.348 } 00:16:56.348 } 00:16:56.348 ]' 00:16:56.348 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.348 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.348 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.348 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:56.348 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.348 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.348 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.348 12:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.607 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:16:56.607 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:16:57.176 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.176 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.176 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.176 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.176 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.176 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.176 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:57.176 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:57.436 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:57.436 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.436 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:16:57.436 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:57.436 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:57.436 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.436 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:57.436 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.436 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.436 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.436 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:57.436 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:57.436 12:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:57.724 00:16:57.724 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.724 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:57.724 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.983 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.983 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.983 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.983 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.983 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.983 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.983 { 00:16:57.983 "cntlid": 127, 00:16:57.983 "qid": 0, 00:16:57.983 "state": "enabled", 00:16:57.983 "thread": "nvmf_tgt_poll_group_000", 00:16:57.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:57.983 "listen_address": { 00:16:57.983 "trtype": "TCP", 00:16:57.983 "adrfam": "IPv4", 00:16:57.983 "traddr": "10.0.0.2", 00:16:57.983 "trsvcid": "4420" 00:16:57.983 }, 00:16:57.983 "peer_address": { 00:16:57.983 "trtype": "TCP", 00:16:57.983 "adrfam": "IPv4", 00:16:57.983 "traddr": "10.0.0.1", 00:16:57.983 "trsvcid": "40954" 00:16:57.983 }, 00:16:57.983 "auth": { 00:16:57.984 "state": "completed", 00:16:57.984 "digest": "sha512", 00:16:57.984 "dhgroup": "ffdhe4096" 00:16:57.984 } 00:16:57.984 } 00:16:57.984 ]' 00:16:57.984 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.984 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.984 12:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.984 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:57.984 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.984 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.984 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.984 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.243 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:16:58.243 12:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:16:58.812 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.812 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.812 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:58.812 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.812 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.812 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.812 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.812 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:58.812 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:59.071 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:59.071 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.071 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.071 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:59.071 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:59.071 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.071 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.071 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:59.071 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.071 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.071 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.071 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.071 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.330 00:16:59.330 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.330 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.330 12:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.588 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.588 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.588 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.588 12:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.588 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.588 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.588 { 00:16:59.588 "cntlid": 129, 00:16:59.588 "qid": 0, 00:16:59.588 "state": "enabled", 00:16:59.588 "thread": "nvmf_tgt_poll_group_000", 00:16:59.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:59.589 "listen_address": { 00:16:59.589 "trtype": "TCP", 00:16:59.589 "adrfam": "IPv4", 00:16:59.589 "traddr": "10.0.0.2", 00:16:59.589 "trsvcid": "4420" 00:16:59.589 }, 00:16:59.589 "peer_address": { 00:16:59.589 "trtype": "TCP", 00:16:59.589 "adrfam": "IPv4", 00:16:59.589 "traddr": "10.0.0.1", 00:16:59.589 "trsvcid": "40976" 00:16:59.589 }, 00:16:59.589 "auth": { 00:16:59.589 "state": "completed", 00:16:59.589 "digest": "sha512", 00:16:59.589 "dhgroup": "ffdhe6144" 00:16:59.589 } 00:16:59.589 } 00:16:59.589 ]' 00:16:59.589 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.589 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.589 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.848 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:59.848 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.848 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.848 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.848 12:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.848 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:16:59.848 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:17:00.416 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.416 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.416 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.416 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.416 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.416 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.416 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:00.416 12:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:00.676 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:00.676 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.676 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:00.676 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:00.676 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:00.676 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.676 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.676 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.676 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.676 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.677 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.677 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.677 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.246 00:17:01.246 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.246 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.246 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.246 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.246 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.246 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.246 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.246 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.246 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.246 { 00:17:01.246 "cntlid": 131, 00:17:01.246 "qid": 0, 00:17:01.246 "state": "enabled", 00:17:01.246 "thread": "nvmf_tgt_poll_group_000", 00:17:01.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:01.246 
"listen_address": { 00:17:01.246 "trtype": "TCP", 00:17:01.246 "adrfam": "IPv4", 00:17:01.246 "traddr": "10.0.0.2", 00:17:01.246 "trsvcid": "4420" 00:17:01.246 }, 00:17:01.246 "peer_address": { 00:17:01.246 "trtype": "TCP", 00:17:01.246 "adrfam": "IPv4", 00:17:01.246 "traddr": "10.0.0.1", 00:17:01.246 "trsvcid": "41018" 00:17:01.246 }, 00:17:01.246 "auth": { 00:17:01.246 "state": "completed", 00:17:01.246 "digest": "sha512", 00:17:01.246 "dhgroup": "ffdhe6144" 00:17:01.246 } 00:17:01.246 } 00:17:01.246 ]' 00:17:01.246 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.246 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.246 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.505 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:01.505 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.505 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.505 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.505 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.505 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:17:01.505 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:17:02.074 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.333 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.333 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.333 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.333 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.333 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.333 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:02.333 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:02.333 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:02.333 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.333 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:02.333 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:02.333 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:02.333 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.333 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.333 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.333 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.333 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.334 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.334 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.334 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.901 00:17:02.901 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:02.901 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.901 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.901 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.901 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.901 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.901 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.901 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.901 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.901 { 00:17:02.901 "cntlid": 133, 00:17:02.901 "qid": 0, 00:17:02.901 "state": "enabled", 00:17:02.901 "thread": "nvmf_tgt_poll_group_000", 00:17:02.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:02.901 "listen_address": { 00:17:02.901 "trtype": "TCP", 00:17:02.901 "adrfam": "IPv4", 00:17:02.901 "traddr": "10.0.0.2", 00:17:02.901 "trsvcid": "4420" 00:17:02.901 }, 00:17:02.901 "peer_address": { 00:17:02.901 "trtype": "TCP", 00:17:02.901 "adrfam": "IPv4", 00:17:02.901 "traddr": "10.0.0.1", 00:17:02.901 "trsvcid": "34210" 00:17:02.901 }, 00:17:02.901 "auth": { 00:17:02.901 "state": "completed", 00:17:02.901 "digest": "sha512", 00:17:02.901 "dhgroup": "ffdhe6144" 00:17:02.901 } 00:17:02.901 } 00:17:02.901 ]' 00:17:02.901 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.161 12:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.161 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.161 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:03.161 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.161 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.161 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.161 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.420 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:17:03.420 12:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:17:03.989 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.989 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.989 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.989 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.989 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.989 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.989 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:03.989 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:03.989 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:03.989 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.989 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:03.989 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:03.989 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:03.989 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.989 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:03.989 12:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.989 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.249 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.249 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:04.249 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.249 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.508 00:17:04.508 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.508 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.508 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.768 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.768 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.768 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.768 12:40:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.768 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.768 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.768 { 00:17:04.768 "cntlid": 135, 00:17:04.768 "qid": 0, 00:17:04.768 "state": "enabled", 00:17:04.768 "thread": "nvmf_tgt_poll_group_000", 00:17:04.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:04.768 "listen_address": { 00:17:04.768 "trtype": "TCP", 00:17:04.768 "adrfam": "IPv4", 00:17:04.768 "traddr": "10.0.0.2", 00:17:04.768 "trsvcid": "4420" 00:17:04.768 }, 00:17:04.768 "peer_address": { 00:17:04.768 "trtype": "TCP", 00:17:04.768 "adrfam": "IPv4", 00:17:04.768 "traddr": "10.0.0.1", 00:17:04.768 "trsvcid": "34242" 00:17:04.768 }, 00:17:04.768 "auth": { 00:17:04.768 "state": "completed", 00:17:04.768 "digest": "sha512", 00:17:04.768 "dhgroup": "ffdhe6144" 00:17:04.768 } 00:17:04.768 } 00:17:04.768 ]' 00:17:04.768 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.768 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.768 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.768 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:04.768 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.768 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.768 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.768 12:40:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.028 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:17:05.028 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:17:05.596 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.596 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.596 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.596 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.596 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.596 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.596 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.596 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:17:05.596 12:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:05.855 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:05.855 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.855 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.855 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:05.855 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:05.855 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.855 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.855 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.855 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.855 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.855 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.855 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.855 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.424 00:17:06.424 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.424 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.424 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.424 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.424 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.424 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.424 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.424 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.424 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.424 { 00:17:06.424 "cntlid": 137, 00:17:06.424 "qid": 0, 00:17:06.424 "state": "enabled", 00:17:06.424 "thread": "nvmf_tgt_poll_group_000", 00:17:06.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:06.424 "listen_address": { 
00:17:06.424 "trtype": "TCP", 00:17:06.424 "adrfam": "IPv4", 00:17:06.424 "traddr": "10.0.0.2", 00:17:06.424 "trsvcid": "4420" 00:17:06.424 }, 00:17:06.424 "peer_address": { 00:17:06.424 "trtype": "TCP", 00:17:06.424 "adrfam": "IPv4", 00:17:06.424 "traddr": "10.0.0.1", 00:17:06.424 "trsvcid": "34272" 00:17:06.424 }, 00:17:06.424 "auth": { 00:17:06.424 "state": "completed", 00:17:06.424 "digest": "sha512", 00:17:06.424 "dhgroup": "ffdhe8192" 00:17:06.424 } 00:17:06.424 } 00:17:06.424 ]' 00:17:06.424 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.424 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.424 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.684 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:06.684 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.684 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.684 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.684 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.684 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:17:06.684 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:17:07.622 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.622 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:07.622 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.622 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.622 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.622 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.622 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:07.622 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:07.622 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:07.622 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:17:07.622 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.622 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:07.622 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:07.622 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.622 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.622 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.622 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.622 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.622 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.622 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.623 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.191 00:17:08.191 12:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.191 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.191 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.191 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.191 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.191 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.191 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.450 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.450 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.450 { 00:17:08.450 "cntlid": 139, 00:17:08.450 "qid": 0, 00:17:08.450 "state": "enabled", 00:17:08.451 "thread": "nvmf_tgt_poll_group_000", 00:17:08.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:08.451 "listen_address": { 00:17:08.451 "trtype": "TCP", 00:17:08.451 "adrfam": "IPv4", 00:17:08.451 "traddr": "10.0.0.2", 00:17:08.451 "trsvcid": "4420" 00:17:08.451 }, 00:17:08.451 "peer_address": { 00:17:08.451 "trtype": "TCP", 00:17:08.451 "adrfam": "IPv4", 00:17:08.451 "traddr": "10.0.0.1", 00:17:08.451 "trsvcid": "34304" 00:17:08.451 }, 00:17:08.451 "auth": { 00:17:08.451 "state": "completed", 00:17:08.451 "digest": "sha512", 00:17:08.451 "dhgroup": "ffdhe8192" 00:17:08.451 } 00:17:08.451 } 00:17:08.451 ]' 00:17:08.451 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:17:08.451 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.451 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.451 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:08.451 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.451 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.451 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.451 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.711 12:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:17:08.711 12:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: --dhchap-ctrl-secret DHHC-1:02:MTY1MTkwOTQxZTgxNjk0ZWU2MTkzYWM3ODBjNzI1ZWUyODYwM2QxYzkwOGIzNDljsyhgAw==: 00:17:09.279 12:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.279 12:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.279 12:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.279 12:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.279 12:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.279 12:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.279 12:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:09.279 12:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:09.538 12:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:09.538 12:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.539 12:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:09.539 12:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:09.539 12:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:09.539 12:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.539 12:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.539 12:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.539 12:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.539 12:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.539 12:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.539 12:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.539 12:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.108 00:17:10.108 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.108 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.108 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.108 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.108 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.108 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.108 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.108 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.108 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.108 { 00:17:10.108 "cntlid": 141, 00:17:10.108 "qid": 0, 00:17:10.108 "state": "enabled", 00:17:10.108 "thread": "nvmf_tgt_poll_group_000", 00:17:10.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:10.108 "listen_address": { 00:17:10.108 "trtype": "TCP", 00:17:10.108 "adrfam": "IPv4", 00:17:10.108 "traddr": "10.0.0.2", 00:17:10.108 "trsvcid": "4420" 00:17:10.108 }, 00:17:10.108 "peer_address": { 00:17:10.108 "trtype": "TCP", 00:17:10.108 "adrfam": "IPv4", 00:17:10.108 "traddr": "10.0.0.1", 00:17:10.108 "trsvcid": "34342" 00:17:10.108 }, 00:17:10.108 "auth": { 00:17:10.108 "state": "completed", 00:17:10.108 "digest": "sha512", 00:17:10.108 "dhgroup": "ffdhe8192" 00:17:10.108 } 00:17:10.108 } 00:17:10.108 ]' 00:17:10.108 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.108 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.108 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.368 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:10.368 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.368 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:10.368 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.368 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.368 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:17:10.368 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:01:MjkwNTM2Nzc3ZGNiYjYzMWM1NDg5ZjRiMzIwOWIyNWNUTaOW: 00:17:10.936 12:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.936 12:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:10.936 12:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.936 12:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.936 12:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.936 12:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:10.936 12:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:10.936 12:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:11.195 12:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:11.195 12:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.195 12:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:11.195 12:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:11.195 12:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:11.195 12:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.195 12:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:11.195 12:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.195 12:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.195 12:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.195 12:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:11.195 12:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.195 12:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.763 00:17:11.763 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.763 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.763 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.023 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.023 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.023 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.023 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.023 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.023 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.023 { 00:17:12.023 "cntlid": 143, 00:17:12.023 "qid": 0, 00:17:12.023 "state": "enabled", 00:17:12.023 "thread": "nvmf_tgt_poll_group_000", 00:17:12.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:12.023 "listen_address": { 00:17:12.023 
"trtype": "TCP", 00:17:12.023 "adrfam": "IPv4", 00:17:12.023 "traddr": "10.0.0.2", 00:17:12.023 "trsvcid": "4420" 00:17:12.023 }, 00:17:12.023 "peer_address": { 00:17:12.023 "trtype": "TCP", 00:17:12.023 "adrfam": "IPv4", 00:17:12.023 "traddr": "10.0.0.1", 00:17:12.023 "trsvcid": "56364" 00:17:12.023 }, 00:17:12.023 "auth": { 00:17:12.023 "state": "completed", 00:17:12.023 "digest": "sha512", 00:17:12.023 "dhgroup": "ffdhe8192" 00:17:12.023 } 00:17:12.023 } 00:17:12.023 ]' 00:17:12.023 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.023 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.023 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.023 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:12.023 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.023 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.023 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.023 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.282 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:17:12.282 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:17:12.849 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.849 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.849 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.849 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.849 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.849 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:12.849 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:12.849 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:12.849 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:12.849 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:12.849 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:13.109 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:13.109 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.109 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:13.109 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:13.109 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:13.109 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.109 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.109 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.109 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.109 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.109 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.109 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.109 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.676 00:17:13.676 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.676 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.676 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.676 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.676 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.677 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.677 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.677 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.677 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.677 { 00:17:13.677 "cntlid": 145, 00:17:13.677 "qid": 0, 00:17:13.677 "state": "enabled", 00:17:13.677 "thread": "nvmf_tgt_poll_group_000", 00:17:13.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:13.677 "listen_address": { 00:17:13.677 "trtype": "TCP", 00:17:13.677 "adrfam": "IPv4", 00:17:13.677 "traddr": "10.0.0.2", 00:17:13.677 "trsvcid": "4420" 00:17:13.677 }, 00:17:13.677 "peer_address": { 00:17:13.677 "trtype": "TCP", 00:17:13.677 "adrfam": "IPv4", 
00:17:13.677 "traddr": "10.0.0.1", 00:17:13.677 "trsvcid": "56392" 00:17:13.677 }, 00:17:13.677 "auth": { 00:17:13.677 "state": "completed", 00:17:13.677 "digest": "sha512", 00:17:13.677 "dhgroup": "ffdhe8192" 00:17:13.677 } 00:17:13.677 } 00:17:13.677 ]' 00:17:13.677 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.677 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.677 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.936 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:13.936 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.936 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.936 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.936 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.936 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:17:13.936 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:00:ODdjYWIzZTI1NmQ4YzQ4NmU0OGI0MjM5OGRhNzA2ZDk3OGIyN2I3Y2E5YzFjMTM3TZ3J6w==: --dhchap-ctrl-secret DHHC-1:03:MWJlN2NhMjg3OWQ3NjNhMWY2NWJiOTQ2ZGY5YjQxMWEyY2JlYWY5NDA4ZmZjMTg4OWVjODkzOTk3M2ZlOTFkMJ89BcI=: 00:17:14.505 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.505 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.505 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.505 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.763 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.763 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:14.763 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.764 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.764 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.764 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:14.764 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:14.764 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:14.764 
12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:14.764 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.764 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:14.764 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.764 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:14.764 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:14.764 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:15.024 request: 00:17:15.024 { 00:17:15.024 "name": "nvme0", 00:17:15.024 "trtype": "tcp", 00:17:15.024 "traddr": "10.0.0.2", 00:17:15.024 "adrfam": "ipv4", 00:17:15.024 "trsvcid": "4420", 00:17:15.024 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:15.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:15.024 "prchk_reftag": false, 00:17:15.024 "prchk_guard": false, 00:17:15.024 "hdgst": false, 00:17:15.024 "ddgst": false, 00:17:15.024 "dhchap_key": "key2", 00:17:15.024 "allow_unrecognized_csi": false, 00:17:15.024 "method": "bdev_nvme_attach_controller", 00:17:15.024 "req_id": 1 00:17:15.024 } 00:17:15.024 Got JSON-RPC error response 
00:17:15.024 response: 00:17:15.024 { 00:17:15.024 "code": -5, 00:17:15.024 "message": "Input/output error" 00:17:15.024 } 00:17:15.024 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:15.024 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:15.024 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:15.024 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:15.024 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.024 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.024 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.024 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.024 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.024 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.024 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.024 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.024 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:15.024 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # local es=0 00:17:15.024 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:15.024 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:15.024 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.024 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:15.024 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.024 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:15.024 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:15.024 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:15.593 request: 00:17:15.593 { 00:17:15.593 "name": "nvme0", 00:17:15.593 "trtype": "tcp", 00:17:15.593 "traddr": "10.0.0.2", 00:17:15.593 "adrfam": "ipv4", 00:17:15.593 "trsvcid": "4420", 00:17:15.593 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:15.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:15.593 
"prchk_reftag": false, 00:17:15.593 "prchk_guard": false, 00:17:15.593 "hdgst": false, 00:17:15.593 "ddgst": false, 00:17:15.593 "dhchap_key": "key1", 00:17:15.593 "dhchap_ctrlr_key": "ckey2", 00:17:15.593 "allow_unrecognized_csi": false, 00:17:15.593 "method": "bdev_nvme_attach_controller", 00:17:15.593 "req_id": 1 00:17:15.593 } 00:17:15.593 Got JSON-RPC error response 00:17:15.593 response: 00:17:15.593 { 00:17:15.593 "code": -5, 00:17:15.593 "message": "Input/output error" 00:17:15.593 } 00:17:15.593 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:15.593 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:15.593 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:15.593 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:15.593 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.593 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.593 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.593 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.593 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:15.593 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.593 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:15.593 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.593 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.593 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:15.593 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.593 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:15.593 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.593 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:15.593 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.593 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.593 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.593 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.162 request: 
00:17:16.162 { 00:17:16.162 "name": "nvme0", 00:17:16.162 "trtype": "tcp", 00:17:16.162 "traddr": "10.0.0.2", 00:17:16.162 "adrfam": "ipv4", 00:17:16.162 "trsvcid": "4420", 00:17:16.162 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:16.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:16.162 "prchk_reftag": false, 00:17:16.162 "prchk_guard": false, 00:17:16.162 "hdgst": false, 00:17:16.162 "ddgst": false, 00:17:16.162 "dhchap_key": "key1", 00:17:16.162 "dhchap_ctrlr_key": "ckey1", 00:17:16.162 "allow_unrecognized_csi": false, 00:17:16.162 "method": "bdev_nvme_attach_controller", 00:17:16.162 "req_id": 1 00:17:16.162 } 00:17:16.162 Got JSON-RPC error response 00:17:16.162 response: 00:17:16.162 { 00:17:16.162 "code": -5, 00:17:16.162 "message": "Input/output error" 00:17:16.162 } 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2503933 00:17:16.162 12:40:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2503933 ']' 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2503933 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2503933 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2503933' 00:17:16.162 killing process with pid 2503933 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2503933 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2503933 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2526125 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2526125 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2526125 ']' 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:16.162 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.422 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:16.422 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:16.422 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:16.422 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:16.422 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.422 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.422 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:16.422 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 2526125 00:17:16.422 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2526125 ']' 00:17:16.422 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.422 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:16.422 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.422 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:16.422 12:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.682 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:16.682 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:16.682 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:16.682 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.682 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.942 null0 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RSE 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.PBP ]] 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PBP 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Cww 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.L2o ]] 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.L2o 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ONg 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.KwW ]] 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.KwW 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.IrJ 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.942 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.880 nvme0n1 00:17:17.880 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.880 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.880 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.880 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.880 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.880 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.880 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.880 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.880 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.880 { 00:17:17.880 "cntlid": 1, 00:17:17.880 "qid": 0, 00:17:17.880 "state": "enabled", 00:17:17.880 "thread": "nvmf_tgt_poll_group_000", 00:17:17.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:17.880 "listen_address": { 00:17:17.880 "trtype": "TCP", 00:17:17.880 "adrfam": "IPv4", 00:17:17.880 "traddr": "10.0.0.2", 00:17:17.880 "trsvcid": "4420" 00:17:17.880 }, 00:17:17.880 "peer_address": { 00:17:17.880 "trtype": "TCP", 00:17:17.880 "adrfam": "IPv4", 00:17:17.880 "traddr": 
"10.0.0.1", 00:17:17.880 "trsvcid": "56436" 00:17:17.880 }, 00:17:17.880 "auth": { 00:17:17.880 "state": "completed", 00:17:17.880 "digest": "sha512", 00:17:17.881 "dhgroup": "ffdhe8192" 00:17:17.881 } 00:17:17.881 } 00:17:17.881 ]' 00:17:17.881 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.881 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.881 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.881 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:17.881 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.140 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.140 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.140 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.140 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:17:18.140 12:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:17:18.710 12:41:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.710 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:18.710 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.710 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.710 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.711 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:18.711 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.711 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.711 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.711 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:18.711 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:18.972 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:18.972 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:18.972 12:41:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:18.972 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:18.972 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.972 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:18.972 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.972 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:18.972 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:18.972 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.231 request: 00:17:19.231 { 00:17:19.231 "name": "nvme0", 00:17:19.231 "trtype": "tcp", 00:17:19.231 "traddr": "10.0.0.2", 00:17:19.231 "adrfam": "ipv4", 00:17:19.231 "trsvcid": "4420", 00:17:19.231 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:19.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:19.231 "prchk_reftag": false, 00:17:19.231 "prchk_guard": false, 00:17:19.231 "hdgst": false, 00:17:19.231 "ddgst": false, 00:17:19.231 "dhchap_key": "key3", 00:17:19.231 
"allow_unrecognized_csi": false, 00:17:19.231 "method": "bdev_nvme_attach_controller", 00:17:19.231 "req_id": 1 00:17:19.231 } 00:17:19.231 Got JSON-RPC error response 00:17:19.231 response: 00:17:19.232 { 00:17:19.232 "code": -5, 00:17:19.232 "message": "Input/output error" 00:17:19.232 } 00:17:19.232 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:19.232 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.232 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:19.232 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:19.232 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:19.232 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:19.232 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:19.232 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:19.491 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:19.491 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:19.491 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:19.491 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:19.491 12:41:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.491 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:19.491 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.491 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:19.491 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.491 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.491 request: 00:17:19.491 { 00:17:19.491 "name": "nvme0", 00:17:19.491 "trtype": "tcp", 00:17:19.491 "traddr": "10.0.0.2", 00:17:19.491 "adrfam": "ipv4", 00:17:19.491 "trsvcid": "4420", 00:17:19.491 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:19.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:19.491 "prchk_reftag": false, 00:17:19.491 "prchk_guard": false, 00:17:19.491 "hdgst": false, 00:17:19.491 "ddgst": false, 00:17:19.491 "dhchap_key": "key3", 00:17:19.491 "allow_unrecognized_csi": false, 00:17:19.491 "method": "bdev_nvme_attach_controller", 00:17:19.491 "req_id": 1 00:17:19.491 } 00:17:19.491 Got JSON-RPC error response 00:17:19.491 response: 00:17:19.491 { 00:17:19.491 "code": -5, 00:17:19.491 "message": "Input/output error" 00:17:19.491 } 00:17:19.751 
12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:19.751 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:20.320 request: 00:17:20.320 { 00:17:20.320 "name": "nvme0", 00:17:20.320 "trtype": "tcp", 00:17:20.320 "traddr": "10.0.0.2", 00:17:20.320 "adrfam": "ipv4", 00:17:20.320 "trsvcid": "4420", 00:17:20.320 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:20.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:20.320 "prchk_reftag": false, 00:17:20.320 "prchk_guard": false, 00:17:20.320 "hdgst": false, 00:17:20.320 "ddgst": false, 00:17:20.320 "dhchap_key": "key0", 00:17:20.320 "dhchap_ctrlr_key": "key1", 00:17:20.320 "allow_unrecognized_csi": false, 00:17:20.320 "method": "bdev_nvme_attach_controller", 00:17:20.321 "req_id": 1 00:17:20.321 } 00:17:20.321 Got JSON-RPC error response 00:17:20.321 response: 00:17:20.321 { 00:17:20.321 "code": -5, 00:17:20.321 "message": "Input/output error" 00:17:20.321 } 00:17:20.321 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:20.321 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:20.321 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:20.321 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:20.321 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:20.321 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:20.321 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:20.321 nvme0n1 00:17:20.321 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:20.321 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.321 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:20.579 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.579 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.579 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.836 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:20.836 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.836 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:20.836 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.836 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:20.836 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:20.836 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:21.769 nvme0n1 00:17:21.769 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:21.769 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:21.769 12:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.769 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.769 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:21.769 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.769 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.769 
12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.769 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:21.769 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:21.770 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.028 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.028 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:17:22.028 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: --dhchap-ctrl-secret DHHC-1:03:NWIyOWVhMzdlM2QxY2I3Y2NhOTAwMzdhNGY2OWQ5NDY1NDM0OWJlMWZjODJlY2MyZTk3OWY0YmI1Nzg0NGU3YbPRy9c=: 00:17:22.594 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:22.594 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:22.594 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:22.594 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:22.594 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:22.594 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:22.594 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:22.594 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.594 12:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.851 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:22.851 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:22.851 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:22.851 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:22.851 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:22.851 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:22.851 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:22.851 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:22.851 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:22.851 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:23.108 request: 00:17:23.108 { 00:17:23.108 "name": "nvme0", 00:17:23.108 "trtype": "tcp", 00:17:23.108 "traddr": "10.0.0.2", 00:17:23.108 "adrfam": "ipv4", 00:17:23.108 "trsvcid": "4420", 00:17:23.108 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:23.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:23.108 "prchk_reftag": false, 00:17:23.108 "prchk_guard": false, 00:17:23.108 "hdgst": false, 00:17:23.108 "ddgst": false, 00:17:23.108 "dhchap_key": "key1", 00:17:23.108 "allow_unrecognized_csi": false, 00:17:23.108 "method": "bdev_nvme_attach_controller", 00:17:23.108 "req_id": 1 00:17:23.108 } 00:17:23.108 Got JSON-RPC error response 00:17:23.108 response: 00:17:23.108 { 00:17:23.108 "code": -5, 00:17:23.108 "message": "Input/output error" 00:17:23.108 } 00:17:23.108 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:23.108 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:23.108 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:23.108 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:23.108 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:23.108 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:23.108 12:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:24.041 nvme0n1 00:17:24.041 12:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:24.041 12:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.041 12:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:24.041 12:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.041 12:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.041 12:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.300 12:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.300 12:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.300 12:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:24.300 12:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.300 12:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:24.300 12:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:24.300 12:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:24.562 nvme0n1 00:17:24.562 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:24.562 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:24.563 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.857 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.857 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.857 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.137 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:25.137 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.137 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.137 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.137 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: '' 2s 00:17:25.137 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:25.137 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:25.137 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: 00:17:25.137 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:25.137 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:25.137 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:25.137 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: ]] 00:17:25.137 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ODUwN2JiYjhmN2UyN2VlNTdlOTYyZjdmZWY5ODEwYTJErtK/: 00:17:25.137 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:25.137 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:25.137 12:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:27.100 
12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:27.100 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:27.100 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:27.100 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:27.100 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:27.100 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:27.100 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:27.100 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:27.100 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.100 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.100 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.100 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: 2s 00:17:27.100 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:27.100 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:27.100 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:27.100 12:41:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: 00:17:27.100 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:27.100 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:27.100 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:27.100 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: ]] 00:17:27.100 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NzU4Njg1NzdiNDBiNjBkYWE2MjAyZmQ0MTlmYzAyOGRiNmI1NjI4NjRiYzU2ZTZhfJv/wQ==: 00:17:27.100 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:27.100 12:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:28.998 12:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:28.998 12:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:28.998 12:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:28.998 12:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:28.998 12:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:28.998 12:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:28.998 12:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:28.998 12:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.256 12:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:29.256 12:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.256 12:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.256 12:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.257 12:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:29.257 12:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:29.257 12:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:29.822 nvme0n1 00:17:29.822 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:17:29.822 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.822 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.822 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.822 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:29.822 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:30.389 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:30.389 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:30.389 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.646 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.646 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.646 12:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.646 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.646 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.646 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:30.646 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:30.904 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:30.904 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:30.904 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.904 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.904 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:30.904 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.904 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.904 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.904 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:30.904 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:30.904 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:30.904 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:30.904 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:30.904 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:30.904 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:30.904 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:30.904 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:31.469 request: 00:17:31.469 { 00:17:31.469 "name": "nvme0", 00:17:31.469 "dhchap_key": "key1", 00:17:31.469 "dhchap_ctrlr_key": "key3", 00:17:31.469 "method": "bdev_nvme_set_keys", 00:17:31.469 "req_id": 1 00:17:31.469 } 00:17:31.469 Got JSON-RPC error response 00:17:31.469 response: 00:17:31.469 { 00:17:31.469 "code": -13, 00:17:31.469 "message": "Permission denied" 00:17:31.469 } 00:17:31.469 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:31.469 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:31.469 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:31.469 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:31.469 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:31.469 12:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:31.469 12:41:13 
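The `NOT hostrpc bdev_nvme_set_keys ...` step above runs an RPC that is expected to fail (the target rejects the key1/key3 combination with `-13 Permission denied`). A minimal sketch of that negative-test wrapper pattern: the real helper (autotest_common.sh@652-679) also validates the wrapped command with `valid_exec_arg` and treats exit codes above 128 (signals) specially, both omitted here.

```shell
# Hedged sketch of the NOT wrapper: run a command that is expected to
# fail, and succeed only if it actually returned a nonzero status.
NOT() {
    local es=0
    "$@" || es=$?
    # For NOT, a nonzero exit status from the wrapped command is success.
    (( es != 0 ))
}
```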
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.727 12:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:31.727 12:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:32.660 12:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:32.660 12:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:32.660 12:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.917 12:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:32.917 12:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:32.917 12:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.917 12:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.917 12:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.917 12:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:32.917 12:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:32.917 12:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:33.850 nvme0n1 00:17:33.850 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:33.850 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.850 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.850 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.850 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:33.851 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:33.851 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:33.851 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:33.851 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.851 12:41:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:33.851 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.851 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:33.851 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:34.108 request: 00:17:34.108 { 00:17:34.108 "name": "nvme0", 00:17:34.108 "dhchap_key": "key2", 00:17:34.108 "dhchap_ctrlr_key": "key0", 00:17:34.108 "method": "bdev_nvme_set_keys", 00:17:34.108 "req_id": 1 00:17:34.108 } 00:17:34.108 Got JSON-RPC error response 00:17:34.108 response: 00:17:34.108 { 00:17:34.108 "code": -13, 00:17:34.108 "message": "Permission denied" 00:17:34.108 } 00:17:34.108 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:34.108 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:34.108 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:34.108 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:34.108 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:34.108 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:34.108 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.366 12:41:16 
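The `(( 1 != 0 )) ... sleep 1s` steps around auth.sh@272-273 wait for the host's controller list to drain after the failed re-key (the controller drops within the 1-second ctrlr-loss timeout). A sketch of that detach-wait loop: `COUNT_CMD` stands in for the traced `rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length` pipeline so the sketch can run without a live SPDK host, and the retry budget is an assumption.

```shell
# Hedged sketch of the detach-wait loop: poll a controller-count command
# until it reports zero live controllers, or time out.
wait_for_detach() {
    local i=0
    # COUNT_CMD must print a single integer (the number of controllers).
    while (( $(${COUNT_CMD:-echo 1}) != 0 )); do
        (( ++i > 3 )) && return 1
        sleep 1
    done
    return 0
}
```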
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:34.366 12:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:35.299 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:35.299 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:35.299 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.557 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:35.557 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:35.557 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:35.557 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2503955 00:17:35.557 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2503955 ']' 00:17:35.557 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2503955 00:17:35.557 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:35.557 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:35.557 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2503955 00:17:35.557 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:35.557 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:35.557 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 2503955' 00:17:35.557 killing process with pid 2503955 00:17:35.557 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2503955 00:17:35.557 12:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2503955 00:17:35.816 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:35.816 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:35.816 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:35.816 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:35.816 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:35.816 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:35.816 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:35.816 rmmod nvme_tcp 00:17:35.816 rmmod nvme_fabrics 00:17:35.816 rmmod nvme_keyring 00:17:35.816 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:35.816 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:35.816 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:35.816 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2526125 ']' 00:17:35.816 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2526125 00:17:35.816 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2526125 ']' 00:17:35.816 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2526125 
00:17:35.816 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:35.816 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:35.816 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2526125 00:17:36.075 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:36.075 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:36.075 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2526125' 00:17:36.075 killing process with pid 2526125 00:17:36.075 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2526125 00:17:36.075 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2526125 00:17:36.075 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:36.075 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:36.075 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:36.075 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:36.075 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:36.075 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:36.075 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:36.075 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:36.075 12:41:18 
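The `killprocess` teardown traced above (autotest_common.sh@954-978) validates the pid, confirms the process is alive, kills it, and waits for it to exit. A reduced sketch of that flow: the real helper also resolves the process name via `ps --no-headers -o comm=` and refuses to kill `sudo`, which is omitted here.

```shell
# Hedged sketch of the killprocess teardown pattern: sanity-check the
# pid, ensure the process exists, terminate it, and reap it.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    # kill -0 sends no signal; it only tests that the process exists.
    kill -0 "$pid" 2>/dev/null || return 1
    kill "$pid"
    # Reap the child; a signal-death exit status is expected, not an error.
    wait "$pid" 2>/dev/null || true
}
```

`wait` only works on children of the calling shell, which holds in the autotest scripts since the target app is launched by the same session.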
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:36.075 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.075 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.075 12:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.RSE /tmp/spdk.key-sha256.Cww /tmp/spdk.key-sha384.ONg /tmp/spdk.key-sha512.IrJ /tmp/spdk.key-sha512.PBP /tmp/spdk.key-sha384.L2o /tmp/spdk.key-sha256.KwW '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:38.608 00:17:38.608 real 2m32.206s 00:17:38.608 user 5m50.676s 00:17:38.608 sys 0m24.125s 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.608 ************************************ 00:17:38.608 END TEST nvmf_auth_target 00:17:38.608 ************************************ 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:38.608 ************************************ 00:17:38.608 START TEST nvmf_bdevio_no_huge 00:17:38.608 ************************************ 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:38.608 * Looking for test storage... 00:17:38.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:38.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.608 --rc genhtml_branch_coverage=1 00:17:38.608 --rc genhtml_function_coverage=1 00:17:38.608 --rc genhtml_legend=1 00:17:38.608 --rc geninfo_all_blocks=1 00:17:38.608 --rc geninfo_unexecuted_blocks=1 00:17:38.608 00:17:38.608 ' 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:38.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.608 --rc genhtml_branch_coverage=1 00:17:38.608 --rc genhtml_function_coverage=1 00:17:38.608 --rc genhtml_legend=1 00:17:38.608 --rc geninfo_all_blocks=1 00:17:38.608 --rc geninfo_unexecuted_blocks=1 00:17:38.608 00:17:38.608 ' 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:38.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.608 --rc genhtml_branch_coverage=1 00:17:38.608 --rc genhtml_function_coverage=1 00:17:38.608 --rc genhtml_legend=1 00:17:38.608 --rc geninfo_all_blocks=1 00:17:38.608 --rc geninfo_unexecuted_blocks=1 00:17:38.608 00:17:38.608 ' 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:38.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.608 --rc genhtml_branch_coverage=1 
00:17:38.608 --rc genhtml_function_coverage=1 00:17:38.608 --rc genhtml_legend=1 00:17:38.608 --rc geninfo_all_blocks=1 00:17:38.608 --rc geninfo_unexecuted_blocks=1 00:17:38.608 00:17:38.608 ' 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.608 12:41:20 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.608 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:38.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:38.609 12:41:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:43.879 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:17:43.880 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:43.880 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:43.880 Found net devices under 0000:86:00.0: cvl_0_0 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.880 
12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:43.880 Found net devices under 0000:86:00.1: cvl_0_1 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:17:43.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:17:43.880 00:17:43.880 --- 10.0.0.2 ping statistics --- 00:17:43.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.880 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:43.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:43.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:17:43.880 00:17:43.880 --- 10.0.0.1 ping statistics --- 00:17:43.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.880 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.880 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:43.881 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:43.881 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.881 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:43.881 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:43.881 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.881 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:43.881 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:43.881 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:17:43.881 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:43.881 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:43.881 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:43.881 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2532852 00:17:43.881 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2532852 00:17:43.881 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:43.881 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2532852 ']' 00:17:43.881 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.881 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.881 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.881 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.881 12:41:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:43.881 [2024-11-28 12:41:26.384777] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:17:43.881 [2024-11-28 12:41:26.384822] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:44.140 [2024-11-28 12:41:26.457072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:44.140 [2024-11-28 12:41:26.504847] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.140 [2024-11-28 12:41:26.504882] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.140 [2024-11-28 12:41:26.504889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.140 [2024-11-28 12:41:26.504895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.140 [2024-11-28 12:41:26.504900] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:44.140 [2024-11-28 12:41:26.506020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:44.140 [2024-11-28 12:41:26.506129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:44.140 [2024-11-28 12:41:26.506234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:44.140 [2024-11-28 12:41:26.506235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:44.708 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.708 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:44.708 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:44.708 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:44.708 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:44.967 [2024-11-28 12:41:27.261086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:44.967 12:41:27 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:44.967 Malloc0 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:44.967 [2024-11-28 12:41:27.305385] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:44.967 12:41:27 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:44.967 { 00:17:44.967 "params": { 00:17:44.967 "name": "Nvme$subsystem", 00:17:44.967 "trtype": "$TEST_TRANSPORT", 00:17:44.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:44.967 "adrfam": "ipv4", 00:17:44.967 "trsvcid": "$NVMF_PORT", 00:17:44.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:44.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:44.967 "hdgst": ${hdgst:-false}, 00:17:44.967 "ddgst": ${ddgst:-false} 00:17:44.967 }, 00:17:44.967 "method": "bdev_nvme_attach_controller" 00:17:44.967 } 00:17:44.967 EOF 00:17:44.967 )") 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:44.967 12:41:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:44.967 "params": { 00:17:44.967 "name": "Nvme1", 00:17:44.967 "trtype": "tcp", 00:17:44.967 "traddr": "10.0.0.2", 00:17:44.967 "adrfam": "ipv4", 00:17:44.967 "trsvcid": "4420", 00:17:44.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:44.967 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:44.967 "hdgst": false, 00:17:44.967 "ddgst": false 00:17:44.967 }, 00:17:44.967 "method": "bdev_nvme_attach_controller" 00:17:44.967 }' 00:17:44.967 [2024-11-28 12:41:27.356209] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:17:44.968 [2024-11-28 12:41:27.356253] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2533098 ] 00:17:44.968 [2024-11-28 12:41:27.422623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:44.968 [2024-11-28 12:41:27.471655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.968 [2024-11-28 12:41:27.471753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.968 [2024-11-28 12:41:27.471753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.536 I/O targets: 00:17:45.536 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:45.536 00:17:45.536 00:17:45.536 CUnit - A unit testing framework for C - Version 2.1-3 00:17:45.536 http://cunit.sourceforge.net/ 00:17:45.536 00:17:45.536 00:17:45.536 Suite: bdevio tests on: Nvme1n1 00:17:45.536 Test: blockdev write read block ...passed 00:17:45.536 Test: blockdev write zeroes read block ...passed 00:17:45.536 Test: blockdev write zeroes read no split ...passed 00:17:45.536 Test: blockdev write zeroes 
read split ...passed 00:17:45.536 Test: blockdev write zeroes read split partial ...passed 00:17:45.536 Test: blockdev reset ...[2024-11-28 12:41:27.875806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:45.536 [2024-11-28 12:41:27.875868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4c8e0 (9): Bad file descriptor 00:17:45.536 [2024-11-28 12:41:27.933524] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:17:45.536 passed 00:17:45.536 Test: blockdev write read 8 blocks ...passed 00:17:45.536 Test: blockdev write read size > 128k ...passed 00:17:45.536 Test: blockdev write read invalid size ...passed 00:17:45.536 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:45.536 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:45.536 Test: blockdev write read max offset ...passed 00:17:45.795 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:45.795 Test: blockdev writev readv 8 blocks ...passed 00:17:45.795 Test: blockdev writev readv 30 x 1block ...passed 00:17:45.795 Test: blockdev writev readv block ...passed 00:17:45.795 Test: blockdev writev readv size > 128k ...passed 00:17:45.795 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:45.795 Test: blockdev comparev and writev ...[2024-11-28 12:41:28.143548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:45.795 [2024-11-28 12:41:28.143577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.795 [2024-11-28 12:41:28.143590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:45.795 [2024-11-28 
12:41:28.143598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:45.795 [2024-11-28 12:41:28.143845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:45.795 [2024-11-28 12:41:28.143861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:45.795 [2024-11-28 12:41:28.143873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:45.795 [2024-11-28 12:41:28.143880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:45.795 [2024-11-28 12:41:28.144128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:45.795 [2024-11-28 12:41:28.144138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:45.795 [2024-11-28 12:41:28.144150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:45.795 [2024-11-28 12:41:28.144157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:45.795 [2024-11-28 12:41:28.144389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:45.795 [2024-11-28 12:41:28.144399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:45.795 [2024-11-28 12:41:28.144410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:17:45.795 [2024-11-28 12:41:28.144418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:45.795 passed 00:17:45.795 Test: blockdev nvme passthru rw ...passed 00:17:45.795 Test: blockdev nvme passthru vendor specific ...[2024-11-28 12:41:28.226356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:45.795 [2024-11-28 12:41:28.226370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:45.795 [2024-11-28 12:41:28.226483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:45.795 [2024-11-28 12:41:28.226493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:45.795 [2024-11-28 12:41:28.226597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:45.795 [2024-11-28 12:41:28.226606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:45.795 [2024-11-28 12:41:28.226715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:45.795 [2024-11-28 12:41:28.226725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:45.795 passed 00:17:45.795 Test: blockdev nvme admin passthru ...passed 00:17:45.795 Test: blockdev copy ...passed 00:17:45.795 00:17:45.795 Run Summary: Type Total Ran Passed Failed Inactive 00:17:45.795 suites 1 1 n/a 0 0 00:17:45.795 tests 23 23 23 0 0 00:17:45.795 asserts 152 152 152 0 n/a 00:17:45.795 00:17:45.795 Elapsed time = 1.044 seconds 
00:17:46.054 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:46.054 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.054 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:46.054 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.054 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:46.054 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:46.054 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:46.054 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:46.054 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:46.054 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:46.054 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:46.054 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:46.054 rmmod nvme_tcp 00:17:46.314 rmmod nvme_fabrics 00:17:46.314 rmmod nvme_keyring 00:17:46.314 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:46.314 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:17:46.314 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:46.314 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2532852 ']' 00:17:46.314 12:41:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2532852 00:17:46.314 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2532852 ']' 00:17:46.314 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2532852 00:17:46.314 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:46.314 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:46.314 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2532852 00:17:46.314 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:46.314 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:46.314 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2532852' 00:17:46.314 killing process with pid 2532852 00:17:46.314 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2532852 00:17:46.314 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2532852 00:17:46.573 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:46.573 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:46.573 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:46.574 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:46.574 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:46.574 12:41:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:46.574 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:46.574 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:46.574 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:46.574 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.574 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:46.574 12:41:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.108 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:49.108 00:17:49.108 real 0m10.346s 00:17:49.108 user 0m13.567s 00:17:49.108 sys 0m5.008s 00:17:49.108 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:49.108 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:49.108 ************************************ 00:17:49.108 END TEST nvmf_bdevio_no_huge 00:17:49.108 ************************************ 00:17:49.108 12:41:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:49.108 12:41:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:49.108 12:41:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.108 12:41:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:49.108 
************************************ 00:17:49.108 START TEST nvmf_tls 00:17:49.108 ************************************ 00:17:49.108 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:49.108 * Looking for test storage... 00:17:49.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:49.108 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:49.108 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:17:49.108 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:49.108 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:49.108 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:49.108 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:49.108 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:49.108 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:49.108 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:49.108 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:49.108 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:49.108 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:49.108 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:49.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.109 --rc genhtml_branch_coverage=1 00:17:49.109 --rc genhtml_function_coverage=1 00:17:49.109 --rc genhtml_legend=1 00:17:49.109 --rc geninfo_all_blocks=1 00:17:49.109 --rc geninfo_unexecuted_blocks=1 00:17:49.109 00:17:49.109 ' 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:49.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.109 --rc genhtml_branch_coverage=1 00:17:49.109 --rc genhtml_function_coverage=1 00:17:49.109 --rc genhtml_legend=1 00:17:49.109 --rc geninfo_all_blocks=1 00:17:49.109 --rc geninfo_unexecuted_blocks=1 00:17:49.109 00:17:49.109 ' 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:49.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.109 --rc genhtml_branch_coverage=1 00:17:49.109 --rc genhtml_function_coverage=1 00:17:49.109 --rc genhtml_legend=1 00:17:49.109 --rc geninfo_all_blocks=1 00:17:49.109 --rc geninfo_unexecuted_blocks=1 00:17:49.109 00:17:49.109 ' 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:49.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.109 --rc genhtml_branch_coverage=1 00:17:49.109 --rc genhtml_function_coverage=1 00:17:49.109 --rc genhtml_legend=1 00:17:49.109 --rc geninfo_all_blocks=1 00:17:49.109 --rc geninfo_unexecuted_blocks=1 00:17:49.109 00:17:49.109 ' 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.109 
12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:49.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:17:49.109 12:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:54.378 12:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:54.378 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:54.378 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.378 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:54.379 12:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:54.379 Found net devices under 0000:86:00.0: cvl_0_0 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:54.379 Found net devices under 0000:86:00.1: cvl_0_1 00:17:54.379 12:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:54.379 
12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:54.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:54.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:17:54.379 00:17:54.379 --- 10.0.0.2 ping statistics --- 00:17:54.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.379 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:54.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:54.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:17:54.379 00:17:54.379 --- 10.0.0.1 ping statistics --- 00:17:54.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.379 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2536799 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2536799 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2536799 ']' 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.379 [2024-11-28 12:41:36.689174] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:17:54.379 [2024-11-28 12:41:36.689223] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.379 [2024-11-28 12:41:36.757708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.379 [2024-11-28 12:41:36.798981] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.379 [2024-11-28 12:41:36.799016] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:54.379 [2024-11-28 12:41:36.799023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.379 [2024-11-28 12:41:36.799029] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.379 [2024-11-28 12:41:36.799034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:54.379 [2024-11-28 12:41:36.799593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:54.379 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:54.638 true 00:17:54.638 12:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:54.638 12:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:54.896 12:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:54.896 12:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:54.896 
12:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:55.154 12:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:55.154 12:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:55.154 12:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:55.154 12:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:55.154 12:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:55.412 12:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:55.412 12:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:55.669 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:55.669 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:55.669 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:55.669 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:55.927 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:55.927 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:55.927 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:17:55.927 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:55.927 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:56.185 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:56.185 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:56.185 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:56.444 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:56.444 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:56.703 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:56.703 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:56.703 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:56.703 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:56.703 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:56.703 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:56.703 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:56.703 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:56.703 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:56.703 12:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:56.703 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:56.703 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:56.703 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:56.703 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:56.703 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:56.703 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:56.703 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:56.703 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:56.703 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:56.703 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.1ej0AGm7jB 00:17:56.703 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:56.703 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.6BY9e0EPs9 00:17:56.703 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:56.703 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:56.703 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.1ej0AGm7jB 00:17:56.703 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.6BY9e0EPs9 00:17:56.703 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:56.961 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:57.221 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.1ej0AGm7jB 00:17:57.221 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.1ej0AGm7jB 00:17:57.221 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:57.221 [2024-11-28 12:41:39.712111] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.221 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:57.479 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:57.737 [2024-11-28 12:41:40.077054] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:57.737 [2024-11-28 12:41:40.077264] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:57.738 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:57.996 malloc0 00:17:57.996 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:57.996 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.1ej0AGm7jB 00:17:58.254 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:58.512 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.1ej0AGm7jB 00:18:08.487 Initializing NVMe Controllers 00:18:08.487 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:08.487 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:08.487 Initialization complete. Launching workers. 
00:18:08.487 ======================================================== 00:18:08.487 Latency(us) 00:18:08.487 Device Information : IOPS MiB/s Average min max 00:18:08.487 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16109.99 62.93 3972.82 827.25 5383.42 00:18:08.487 ======================================================== 00:18:08.487 Total : 16109.99 62.93 3972.82 827.25 5383.42 00:18:08.487 00:18:08.487 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1ej0AGm7jB 00:18:08.487 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:08.487 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:08.487 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:08.487 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1ej0AGm7jB 00:18:08.487 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:08.487 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2539205 00:18:08.487 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:08.487 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:08.487 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2539205 /var/tmp/bdevperf.sock 00:18:08.487 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2539205 ']' 00:18:08.487 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:18:08.487 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.487 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:08.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:08.487 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.487 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.746 [2024-11-28 12:41:51.017483] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:18:08.746 [2024-11-28 12:41:51.017533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2539205 ] 00:18:08.746 [2024-11-28 12:41:51.075560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.746 [2024-11-28 12:41:51.118579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.746 12:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.746 12:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:08.746 12:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1ej0AGm7jB 00:18:09.005 12:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:18:09.264 [2024-11-28 12:41:51.554417] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:09.264 TLSTESTn1 00:18:09.264 12:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:09.264 Running I/O for 10 seconds... 00:18:11.578 5419.00 IOPS, 21.17 MiB/s [2024-11-28T11:41:55.034Z] 5479.50 IOPS, 21.40 MiB/s [2024-11-28T11:41:55.968Z] 5481.67 IOPS, 21.41 MiB/s [2024-11-28T11:41:56.903Z] 5497.50 IOPS, 21.47 MiB/s [2024-11-28T11:41:57.838Z] 5507.80 IOPS, 21.51 MiB/s [2024-11-28T11:41:58.772Z] 5481.17 IOPS, 21.41 MiB/s [2024-11-28T11:42:00.151Z] 5489.57 IOPS, 21.44 MiB/s [2024-11-28T11:42:01.088Z] 5498.75 IOPS, 21.48 MiB/s [2024-11-28T11:42:02.024Z] 5471.11 IOPS, 21.37 MiB/s [2024-11-28T11:42:02.024Z] 5458.60 IOPS, 21.32 MiB/s 00:18:19.505 Latency(us) 00:18:19.505 [2024-11-28T11:42:02.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.505 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:19.505 Verification LBA range: start 0x0 length 0x2000 00:18:19.505 TLSTESTn1 : 10.01 5463.39 21.34 0.00 0.00 23392.88 6496.61 25872.47 00:18:19.505 [2024-11-28T11:42:02.024Z] =================================================================================================================== 00:18:19.505 [2024-11-28T11:42:02.024Z] Total : 5463.39 21.34 0.00 0.00 23392.88 6496.61 25872.47 00:18:19.505 { 00:18:19.505 "results": [ 00:18:19.505 { 00:18:19.505 "job": "TLSTESTn1", 00:18:19.505 "core_mask": "0x4", 00:18:19.505 "workload": "verify", 00:18:19.505 "status": "finished", 00:18:19.505 "verify_range": { 00:18:19.505 "start": 0, 00:18:19.505 "length": 8192 00:18:19.505 }, 00:18:19.505 "queue_depth": 128, 00:18:19.505 "io_size": 4096, 00:18:19.505 "runtime": 10.014487, 00:18:19.505 "iops": 
5463.385193869641, 00:18:19.505 "mibps": 21.341348413553284, 00:18:19.505 "io_failed": 0, 00:18:19.505 "io_timeout": 0, 00:18:19.505 "avg_latency_us": 23392.88104736256, 00:18:19.505 "min_latency_us": 6496.612173913043, 00:18:19.505 "max_latency_us": 25872.47304347826 00:18:19.505 } 00:18:19.505 ], 00:18:19.506 "core_count": 1 00:18:19.506 } 00:18:19.506 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:19.506 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2539205 00:18:19.506 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2539205 ']' 00:18:19.506 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2539205 00:18:19.506 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:19.506 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.506 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2539205 00:18:19.506 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:19.506 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:19.506 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2539205' 00:18:19.506 killing process with pid 2539205 00:18:19.506 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2539205 00:18:19.506 Received shutdown signal, test time was about 10.000000 seconds 00:18:19.506 00:18:19.506 Latency(us) 00:18:19.506 [2024-11-28T11:42:02.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.506 [2024-11-28T11:42:02.025Z] 
=================================================================================================================== 00:18:19.506 [2024-11-28T11:42:02.025Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:19.506 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2539205 00:18:19.506 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6BY9e0EPs9 00:18:19.506 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:19.506 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6BY9e0EPs9 00:18:19.506 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:19.506 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.506 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:19.506 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.506 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6BY9e0EPs9 00:18:19.506 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:19.506 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:19.506 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:19.506 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6BY9e0EPs9 00:18:19.506 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:19.506 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2541097 00:18:19.506 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:19.506 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:19.506 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2541097 /var/tmp/bdevperf.sock 00:18:19.506 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2541097 ']' 00:18:19.506 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.506 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.506 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.506 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.506 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.765 [2024-11-28 12:42:02.048505] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:18:19.765 [2024-11-28 12:42:02.048553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2541097 ] 00:18:19.765 [2024-11-28 12:42:02.107406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.765 [2024-11-28 12:42:02.150712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.765 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.765 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:19.765 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6BY9e0EPs9 00:18:20.023 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:20.282 [2024-11-28 12:42:02.598959] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:20.282 [2024-11-28 12:42:02.607873] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:20.282 [2024-11-28 12:42:02.608371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x166e1a0 (107): Transport endpoint is not connected 00:18:20.282 [2024-11-28 12:42:02.609365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x166e1a0 (9): Bad file descriptor 00:18:20.282 
[2024-11-28 12:42:02.610366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:20.282 [2024-11-28 12:42:02.610375] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:20.282 [2024-11-28 12:42:02.610382] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:20.282 [2024-11-28 12:42:02.610390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:20.282 request: 00:18:20.282 { 00:18:20.282 "name": "TLSTEST", 00:18:20.282 "trtype": "tcp", 00:18:20.282 "traddr": "10.0.0.2", 00:18:20.282 "adrfam": "ipv4", 00:18:20.282 "trsvcid": "4420", 00:18:20.282 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.283 "prchk_reftag": false, 00:18:20.283 "prchk_guard": false, 00:18:20.283 "hdgst": false, 00:18:20.283 "ddgst": false, 00:18:20.283 "psk": "key0", 00:18:20.283 "allow_unrecognized_csi": false, 00:18:20.283 "method": "bdev_nvme_attach_controller", 00:18:20.283 "req_id": 1 00:18:20.283 } 00:18:20.283 Got JSON-RPC error response 00:18:20.283 response: 00:18:20.283 { 00:18:20.283 "code": -5, 00:18:20.283 "message": "Input/output error" 00:18:20.283 } 00:18:20.283 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2541097 00:18:20.283 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2541097 ']' 00:18:20.283 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2541097 00:18:20.283 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:20.283 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.283 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2541097 00:18:20.283 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:20.283 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:20.283 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2541097' 00:18:20.283 killing process with pid 2541097 00:18:20.283 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2541097 00:18:20.283 Received shutdown signal, test time was about 10.000000 seconds 00:18:20.283 00:18:20.283 Latency(us) 00:18:20.283 [2024-11-28T11:42:02.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.283 [2024-11-28T11:42:02.802Z] =================================================================================================================== 00:18:20.283 [2024-11-28T11:42:02.802Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:20.283 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2541097 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1ej0AGm7jB 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1ej0AGm7jB 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1ej0AGm7jB 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1ej0AGm7jB 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2541185 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2541185 
/var/tmp/bdevperf.sock 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2541185 ']' 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:20.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:20.543 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.543 [2024-11-28 12:42:02.887649] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:18:20.543 [2024-11-28 12:42:02.887700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2541185 ] 00:18:20.543 [2024-11-28 12:42:02.946235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.543 [2024-11-28 12:42:02.983584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.802 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.802 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:20.802 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1ej0AGm7jB 00:18:20.802 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:21.062 [2024-11-28 12:42:03.455942] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:21.062 [2024-11-28 12:42:03.460792] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:21.062 [2024-11-28 12:42:03.460815] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:21.062 [2024-11-28 12:42:03.460845] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:21.062 [2024-11-28 12:42:03.461477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x184c1a0 (107): Transport endpoint is not connected 00:18:21.062 [2024-11-28 12:42:03.462469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x184c1a0 (9): Bad file descriptor 00:18:21.062 [2024-11-28 12:42:03.463470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:21.062 [2024-11-28 12:42:03.463480] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:21.062 [2024-11-28 12:42:03.463487] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:21.062 [2024-11-28 12:42:03.463496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:21.062 request: 00:18:21.062 { 00:18:21.062 "name": "TLSTEST", 00:18:21.062 "trtype": "tcp", 00:18:21.062 "traddr": "10.0.0.2", 00:18:21.062 "adrfam": "ipv4", 00:18:21.062 "trsvcid": "4420", 00:18:21.062 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.062 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:21.062 "prchk_reftag": false, 00:18:21.062 "prchk_guard": false, 00:18:21.062 "hdgst": false, 00:18:21.062 "ddgst": false, 00:18:21.062 "psk": "key0", 00:18:21.062 "allow_unrecognized_csi": false, 00:18:21.062 "method": "bdev_nvme_attach_controller", 00:18:21.062 "req_id": 1 00:18:21.062 } 00:18:21.062 Got JSON-RPC error response 00:18:21.062 response: 00:18:21.062 { 00:18:21.062 "code": -5, 00:18:21.062 "message": "Input/output error" 00:18:21.062 } 00:18:21.062 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2541185 00:18:21.062 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2541185 ']' 00:18:21.062 12:42:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2541185 00:18:21.062 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:21.062 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.062 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2541185 00:18:21.062 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:21.062 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:21.062 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2541185' 00:18:21.062 killing process with pid 2541185 00:18:21.062 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2541185 00:18:21.062 Received shutdown signal, test time was about 10.000000 seconds 00:18:21.062 00:18:21.062 Latency(us) 00:18:21.062 [2024-11-28T11:42:03.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.062 [2024-11-28T11:42:03.581Z] =================================================================================================================== 00:18:21.062 [2024-11-28T11:42:03.581Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:21.062 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2541185 00:18:21.322 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:21.322 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:21.322 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:21.322 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:21.322 12:42:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:21.322 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1ej0AGm7jB 00:18:21.322 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:21.322 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1ej0AGm7jB 00:18:21.322 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:21.322 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.322 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:21.322 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.323 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1ej0AGm7jB 00:18:21.323 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:21.323 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:21.323 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:21.323 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1ej0AGm7jB 00:18:21.323 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:21.323 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2541419 00:18:21.323 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:21.323 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:21.323 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2541419 /var/tmp/bdevperf.sock 00:18:21.323 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2541419 ']' 00:18:21.323 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:21.323 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.323 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:21.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:21.323 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.323 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.323 [2024-11-28 12:42:03.738717] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:18:21.323 [2024-11-28 12:42:03.738767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2541419 ] 00:18:21.323 [2024-11-28 12:42:03.800107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.323 [2024-11-28 12:42:03.837252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.582 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:21.582 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:21.582 12:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1ej0AGm7jB 00:18:21.841 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:21.841 [2024-11-28 12:42:04.293111] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:21.841 [2024-11-28 12:42:04.302102] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:21.841 [2024-11-28 12:42:04.302130] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:21.841 [2024-11-28 12:42:04.302153] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:21.841 [2024-11-28 12:42:04.302506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23151a0 (107): Transport endpoint is not connected 00:18:21.841 [2024-11-28 12:42:04.303500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23151a0 (9): Bad file descriptor 00:18:21.841 [2024-11-28 12:42:04.304502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:21.841 [2024-11-28 12:42:04.304511] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:21.841 [2024-11-28 12:42:04.304518] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:21.841 [2024-11-28 12:42:04.304525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:18:21.841 request: 00:18:21.841 { 00:18:21.841 "name": "TLSTEST", 00:18:21.841 "trtype": "tcp", 00:18:21.841 "traddr": "10.0.0.2", 00:18:21.841 "adrfam": "ipv4", 00:18:21.841 "trsvcid": "4420", 00:18:21.841 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:21.841 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:21.841 "prchk_reftag": false, 00:18:21.841 "prchk_guard": false, 00:18:21.841 "hdgst": false, 00:18:21.841 "ddgst": false, 00:18:21.841 "psk": "key0", 00:18:21.841 "allow_unrecognized_csi": false, 00:18:21.841 "method": "bdev_nvme_attach_controller", 00:18:21.841 "req_id": 1 00:18:21.841 } 00:18:21.841 Got JSON-RPC error response 00:18:21.841 response: 00:18:21.841 { 00:18:21.841 "code": -5, 00:18:21.841 "message": "Input/output error" 00:18:21.841 } 00:18:21.841 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2541419 00:18:21.841 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2541419 ']' 00:18:21.841 12:42:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2541419 00:18:21.841 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:21.841 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.841 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2541419 00:18:22.100 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:22.100 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:22.100 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2541419' 00:18:22.100 killing process with pid 2541419 00:18:22.100 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2541419 00:18:22.100 Received shutdown signal, test time was about 10.000000 seconds 00:18:22.100 00:18:22.100 Latency(us) 00:18:22.100 [2024-11-28T11:42:04.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.100 [2024-11-28T11:42:04.619Z] =================================================================================================================== 00:18:22.100 [2024-11-28T11:42:04.619Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:22.100 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2541419 00:18:22.100 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:22.100 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:22.101 12:42:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2541444 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:22.101 12:42:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2541444 /var/tmp/bdevperf.sock 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2541444 ']' 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:22.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.101 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.101 [2024-11-28 12:42:04.578984] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:18:22.101 [2024-11-28 12:42:04.579034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2541444 ] 00:18:22.360 [2024-11-28 12:42:04.639479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.360 [2024-11-28 12:42:04.680317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:22.360 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:22.360 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:22.360 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:22.618 [2024-11-28 12:42:04.952652] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:22.618 [2024-11-28 12:42:04.952688] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:22.618 request: 00:18:22.618 { 00:18:22.618 "name": "key0", 00:18:22.618 "path": "", 00:18:22.618 "method": "keyring_file_add_key", 00:18:22.618 "req_id": 1 00:18:22.618 } 00:18:22.618 Got JSON-RPC error response 00:18:22.618 response: 00:18:22.618 { 00:18:22.618 "code": -1, 00:18:22.618 "message": "Operation not permitted" 00:18:22.618 } 00:18:22.618 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:22.884 [2024-11-28 12:42:05.137225] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:18:22.884 [2024-11-28 12:42:05.137253] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:22.884 request: 00:18:22.884 { 00:18:22.884 "name": "TLSTEST", 00:18:22.884 "trtype": "tcp", 00:18:22.884 "traddr": "10.0.0.2", 00:18:22.884 "adrfam": "ipv4", 00:18:22.884 "trsvcid": "4420", 00:18:22.884 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:22.884 "prchk_reftag": false, 00:18:22.884 "prchk_guard": false, 00:18:22.884 "hdgst": false, 00:18:22.884 "ddgst": false, 00:18:22.884 "psk": "key0", 00:18:22.884 "allow_unrecognized_csi": false, 00:18:22.884 "method": "bdev_nvme_attach_controller", 00:18:22.884 "req_id": 1 00:18:22.884 } 00:18:22.884 Got JSON-RPC error response 00:18:22.884 response: 00:18:22.884 { 00:18:22.884 "code": -126, 00:18:22.884 "message": "Required key not available" 00:18:22.884 } 00:18:22.884 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2541444 00:18:22.884 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2541444 ']' 00:18:22.884 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2541444 00:18:22.884 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:22.884 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:22.884 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2541444 00:18:22.884 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:22.884 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:22.884 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2541444' 00:18:22.884 killing process with pid 2541444 
00:18:22.884 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2541444 00:18:22.884 Received shutdown signal, test time was about 10.000000 seconds 00:18:22.884 00:18:22.884 Latency(us) 00:18:22.885 [2024-11-28T11:42:05.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.885 [2024-11-28T11:42:05.404Z] =================================================================================================================== 00:18:22.885 [2024-11-28T11:42:05.404Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:22.885 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2541444 00:18:22.885 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:22.885 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:22.885 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:22.885 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:22.885 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:22.885 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2536799 00:18:22.885 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2536799 ']' 00:18:22.885 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2536799 00:18:22.885 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:22.885 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:22.885 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2536799 00:18:23.147 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:18:23.147 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:23.147 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2536799' 00:18:23.147 killing process with pid 2536799 00:18:23.147 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2536799 00:18:23.147 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2536799 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Re8rfBDX5W 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:23.148 12:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Re8rfBDX5W 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2541686 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2541686 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2541686 ']' 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.148 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.407 [2024-11-28 12:42:05.682997] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:18:23.407 [2024-11-28 12:42:05.683049] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.407 [2024-11-28 12:42:05.750287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.407 [2024-11-28 12:42:05.789988] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.407 [2024-11-28 12:42:05.790022] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.407 [2024-11-28 12:42:05.790029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:23.407 [2024-11-28 12:42:05.790038] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:23.407 [2024-11-28 12:42:05.790043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:23.407 [2024-11-28 12:42:05.790654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.407 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.407 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:23.407 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:23.407 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:23.407 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.407 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.407 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Re8rfBDX5W 00:18:23.407 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Re8rfBDX5W 00:18:23.407 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:23.666 [2024-11-28 12:42:06.090848] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.666 12:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:23.925 12:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:24.183 [2024-11-28 12:42:06.471843] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:24.183 [2024-11-28 12:42:06.472065] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:24.184 12:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:24.184 malloc0 00:18:24.184 12:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:24.442 12:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Re8rfBDX5W 00:18:24.700 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:24.700 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Re8rfBDX5W 00:18:24.700 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:24.700 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:24.700 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:24.701 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Re8rfBDX5W 00:18:24.701 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:24.701 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2541939 00:18:24.701 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:24.701 12:42:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:24.701 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2541939 /var/tmp/bdevperf.sock 00:18:24.701 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2541939 ']' 00:18:24.701 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.701 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:24.701 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:24.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:24.701 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:24.701 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.959 [2024-11-28 12:42:07.252138] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
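The `killprocess` helper invoked repeatedly in this trace only signals a pid after confirming it is still alive (`kill -0`) and that its command name is the expected reactor process (`ps --no-headers -o comm=`). A simplified stand-alone sketch of that pattern (the real helper also special-cases processes running under sudo):

```shell
#!/bin/sh
# Sketch of the killprocess check-then-kill pattern from
# autotest_common.sh: verify liveness and command name before signalling.
killprocess() {
    pid=$1
    kill -0 "$pid" 2>/dev/null || return 1       # process already gone
    name=$(ps --no-headers -o comm= "$pid")      # GNU ps assumed
    echo "killing process with pid $pid ($name)"
    kill "$pid"
}

sleep 30 &                 # stand-in for the app under test
target=$!
killprocess "$target"
wait "$target" 2>/dev/null || true
```

The `comm=` check guards against pid reuse: a recycled pid belonging to an unrelated process fails the name comparison instead of being killed.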
00:18:24.959 [2024-11-28 12:42:07.252187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2541939 ] 00:18:24.959 [2024-11-28 12:42:07.310319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.959 [2024-11-28 12:42:07.353228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.959 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.959 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:24.959 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Re8rfBDX5W 00:18:25.217 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:25.476 [2024-11-28 12:42:07.813506] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:25.476 TLSTESTn1 00:18:25.476 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:25.476 Running I/O for 10 seconds... 
00:18:27.788 5397.00 IOPS, 21.08 MiB/s [2024-11-28T11:42:11.245Z] 5445.00 IOPS, 21.27 MiB/s [2024-11-28T11:42:12.181Z] 5407.67 IOPS, 21.12 MiB/s [2024-11-28T11:42:13.117Z] 5382.50 IOPS, 21.03 MiB/s [2024-11-28T11:42:14.054Z] 5394.20 IOPS, 21.07 MiB/s [2024-11-28T11:42:15.428Z] 5398.83 IOPS, 21.09 MiB/s [2024-11-28T11:42:16.363Z] 5385.86 IOPS, 21.04 MiB/s [2024-11-28T11:42:17.299Z] 5374.00 IOPS, 20.99 MiB/s [2024-11-28T11:42:18.236Z] 5246.33 IOPS, 20.49 MiB/s [2024-11-28T11:42:18.236Z] 5140.50 IOPS, 20.08 MiB/s 00:18:35.717 Latency(us) 00:18:35.717 [2024-11-28T11:42:18.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.717 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:35.717 Verification LBA range: start 0x0 length 0x2000 00:18:35.717 TLSTESTn1 : 10.02 5142.54 20.09 0.00 0.00 24850.91 5100.41 33052.94 00:18:35.717 [2024-11-28T11:42:18.236Z] =================================================================================================================== 00:18:35.717 [2024-11-28T11:42:18.236Z] Total : 5142.54 20.09 0.00 0.00 24850.91 5100.41 33052.94 00:18:35.717 { 00:18:35.717 "results": [ 00:18:35.717 { 00:18:35.717 "job": "TLSTESTn1", 00:18:35.717 "core_mask": "0x4", 00:18:35.717 "workload": "verify", 00:18:35.717 "status": "finished", 00:18:35.717 "verify_range": { 00:18:35.717 "start": 0, 00:18:35.717 "length": 8192 00:18:35.717 }, 00:18:35.717 "queue_depth": 128, 00:18:35.717 "io_size": 4096, 00:18:35.717 "runtime": 10.020735, 00:18:35.717 "iops": 5142.536949634932, 00:18:35.717 "mibps": 20.088034959511454, 00:18:35.717 "io_failed": 0, 00:18:35.717 "io_timeout": 0, 00:18:35.717 "avg_latency_us": 24850.907760209782, 00:18:35.717 "min_latency_us": 5100.410434782609, 00:18:35.717 "max_latency_us": 33052.93913043478 00:18:35.717 } 00:18:35.717 ], 00:18:35.717 "core_count": 1 00:18:35.717 } 00:18:35.717 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:35.717 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2541939 00:18:35.717 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2541939 ']' 00:18:35.717 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2541939 00:18:35.717 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:35.717 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:35.717 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2541939 00:18:35.717 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:35.717 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:35.717 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2541939' 00:18:35.717 killing process with pid 2541939 00:18:35.717 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2541939 00:18:35.717 Received shutdown signal, test time was about 10.000000 seconds 00:18:35.717 00:18:35.717 Latency(us) 00:18:35.717 [2024-11-28T11:42:18.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.717 [2024-11-28T11:42:18.236Z] =================================================================================================================== 00:18:35.717 [2024-11-28T11:42:18.236Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:35.717 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2541939 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Re8rfBDX5W 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Re8rfBDX5W 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Re8rfBDX5W 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Re8rfBDX5W 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Re8rfBDX5W 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2544165 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2544165 /var/tmp/bdevperf.sock 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2544165 ']' 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:35.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.977 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.977 [2024-11-28 12:42:18.325932] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:18:35.977 [2024-11-28 12:42:18.325989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2544165 ] 00:18:35.977 [2024-11-28 12:42:18.383793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.977 [2024-11-28 12:42:18.426274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.236 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.236 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:36.236 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Re8rfBDX5W 00:18:36.236 [2024-11-28 12:42:18.689861] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Re8rfBDX5W': 0100666 00:18:36.236 [2024-11-28 12:42:18.689889] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:36.236 request: 00:18:36.236 { 00:18:36.236 "name": "key0", 00:18:36.236 "path": "/tmp/tmp.Re8rfBDX5W", 00:18:36.236 "method": "keyring_file_add_key", 00:18:36.236 "req_id": 1 00:18:36.236 } 00:18:36.236 Got JSON-RPC error response 00:18:36.236 response: 00:18:36.236 { 00:18:36.236 "code": -1, 00:18:36.236 "message": "Operation not permitted" 00:18:36.236 } 00:18:36.236 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:36.496 [2024-11-28 12:42:18.886467] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:36.496 [2024-11-28 12:42:18.886503] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:36.496 request: 00:18:36.496 { 00:18:36.496 "name": "TLSTEST", 00:18:36.496 "trtype": "tcp", 00:18:36.496 "traddr": "10.0.0.2", 00:18:36.496 "adrfam": "ipv4", 00:18:36.496 "trsvcid": "4420", 00:18:36.496 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.496 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:36.496 "prchk_reftag": false, 00:18:36.496 "prchk_guard": false, 00:18:36.496 "hdgst": false, 00:18:36.496 "ddgst": false, 00:18:36.496 "psk": "key0", 00:18:36.496 "allow_unrecognized_csi": false, 00:18:36.496 "method": "bdev_nvme_attach_controller", 00:18:36.496 "req_id": 1 00:18:36.496 } 00:18:36.496 Got JSON-RPC error response 00:18:36.496 response: 00:18:36.496 { 00:18:36.496 "code": -126, 00:18:36.496 "message": "Required key not available" 00:18:36.496 } 00:18:36.496 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2544165 00:18:36.496 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2544165 ']' 00:18:36.496 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2544165 00:18:36.496 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:36.496 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.496 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2544165 00:18:36.496 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:36.496 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:36.496 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2544165' 00:18:36.496 killing process with pid 2544165 00:18:36.496 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2544165 00:18:36.496 Received shutdown signal, test time was about 10.000000 seconds 00:18:36.496 00:18:36.496 Latency(us) 00:18:36.496 [2024-11-28T11:42:19.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.496 [2024-11-28T11:42:19.015Z] =================================================================================================================== 00:18:36.496 [2024-11-28T11:42:19.015Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:36.496 12:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2544165 00:18:36.756 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:36.756 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:36.756 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:36.756 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:36.756 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:36.756 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2541686 00:18:36.756 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2541686 ']' 00:18:36.756 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2541686 00:18:36.756 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:36.756 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.756 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2541686 00:18:36.756 
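The "Invalid permissions for key file '/tmp/tmp.Re8rfBDX5W': 0100666" error above is the target refusing a PSK file that is accessible to group/other: the same file accepted at mode 0600 (target/tls.sh@163) is rejected after `chmod 0666` (target/tls.sh@171). A rough stand-alone model of that gate; the exact mode bits SPDK's `keyring_file_check_path` masks are an assumption here:

```shell
#!/bin/sh
# Model of the key-file permission gate seen in the log: a PSK file
# must carry no group/other permission bits (0600 passes, 0666 fails).
key_perms_ok() {
    mode=$(stat -c %a "$1")     # e.g. "600" or "666" (GNU stat)
    case "$mode" in
        ?00) return 0 ;;        # group and other bits both zero
        *)   return 1 ;;
    esac
}

key=$(mktemp)
chmod 0600 "$key"
key_perms_ok "$key" && echo "0600: accepted"
chmod 0666 "$key"
key_perms_ok "$key" || echo "0666: rejected"
rm -f "$key"
```

This is why the subsequent `keyring_file_add_key` RPC returns `-1 "Operation not permitted"` and the `bdev_nvme_attach_controller` call then fails with `-126 "Required key not available"`: the key never entered the keyring.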
12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:36.756 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:36.756 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2541686' 00:18:36.756 killing process with pid 2541686 00:18:36.756 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2541686 00:18:36.756 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2541686 00:18:37.015 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:37.015 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:37.015 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:37.015 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.015 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2544402 00:18:37.015 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:37.016 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2544402 00:18:37.016 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2544402 ']' 00:18:37.016 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.016 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.016 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:37.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.016 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.016 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.016 [2024-11-28 12:42:19.388654] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:18:37.016 [2024-11-28 12:42:19.388704] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.016 [2024-11-28 12:42:19.452208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.016 [2024-11-28 12:42:19.493045] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.016 [2024-11-28 12:42:19.493082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.016 [2024-11-28 12:42:19.493089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.016 [2024-11-28 12:42:19.493095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.016 [2024-11-28 12:42:19.493101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
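The `NOT` wrapper driving the `target/tls.sh@178 -- # NOT setup_nvmf_tgt ...` step runs a command and succeeds only when that command fails, which is how the expected `keyring_file_add_key` rejection below is turned into a passing test. A minimal sketch of that negation helper (simplified: the real one in `autotest_common.sh` also validates the argument via `type -t` and tracks the status in `es`):

```shell
#!/bin/sh
# Simplified "NOT" helper: invert a command's exit status so a test
# step passes exactly when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1
    fi
    return 0
}

NOT false && echo "wrapped command failed, as required"
NOT true  || echo "wrapped command succeeded, so NOT reports failure"
```

Inverting at the wrapper level keeps negative tests in the same pass/fail framework as positive ones, so a permission error that *should* happen never aborts the run.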
00:18:37.016 [2024-11-28 12:42:19.493656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.275 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.275 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:37.275 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:37.275 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:37.275 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.275 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.275 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Re8rfBDX5W 00:18:37.275 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:37.275 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Re8rfBDX5W 00:18:37.275 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:37.275 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.275 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:37.275 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.275 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.Re8rfBDX5W 00:18:37.275 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Re8rfBDX5W 00:18:37.275 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:37.534 [2024-11-28 12:42:19.799524] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.534 12:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:37.534 12:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:37.793 [2024-11-28 12:42:20.180521] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:37.793 [2024-11-28 12:42:20.180730] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.793 12:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:38.052 malloc0 00:18:38.052 12:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:38.311 12:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Re8rfBDX5W 00:18:38.311 [2024-11-28 12:42:20.754256] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Re8rfBDX5W': 0100666 00:18:38.311 [2024-11-28 12:42:20.754286] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:38.311 request: 00:18:38.311 { 00:18:38.311 "name": "key0", 00:18:38.311 "path": "/tmp/tmp.Re8rfBDX5W", 00:18:38.311 "method": "keyring_file_add_key", 00:18:38.311 "req_id": 1 
00:18:38.311 } 00:18:38.311 Got JSON-RPC error response 00:18:38.311 response: 00:18:38.311 { 00:18:38.311 "code": -1, 00:18:38.311 "message": "Operation not permitted" 00:18:38.311 } 00:18:38.312 12:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:38.573 [2024-11-28 12:42:20.942766] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:38.573 [2024-11-28 12:42:20.942798] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:38.573 request: 00:18:38.573 { 00:18:38.573 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.573 "host": "nqn.2016-06.io.spdk:host1", 00:18:38.573 "psk": "key0", 00:18:38.573 "method": "nvmf_subsystem_add_host", 00:18:38.573 "req_id": 1 00:18:38.573 } 00:18:38.573 Got JSON-RPC error response 00:18:38.573 response: 00:18:38.573 { 00:18:38.573 "code": -32603, 00:18:38.573 "message": "Internal error" 00:18:38.573 } 00:18:38.573 12:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:38.573 12:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:38.573 12:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:38.573 12:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:38.573 12:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2544402 00:18:38.573 12:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2544402 ']' 00:18:38.573 12:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2544402 00:18:38.573 12:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:38.573 12:42:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.573 12:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2544402 00:18:38.573 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:38.573 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:38.573 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2544402' 00:18:38.573 killing process with pid 2544402 00:18:38.573 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2544402 00:18:38.573 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2544402 00:18:38.833 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Re8rfBDX5W 00:18:38.833 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:38.833 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:38.833 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:38.833 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.833 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2544673 00:18:38.833 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:38.833 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2544673 00:18:38.833 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2544673 ']' 00:18:38.833 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.833 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.833 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.833 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.833 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.833 [2024-11-28 12:42:21.231505] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:18:38.833 [2024-11-28 12:42:21.231551] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.833 [2024-11-28 12:42:21.297440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.833 [2024-11-28 12:42:21.332938] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.833 [2024-11-28 12:42:21.332979] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.833 [2024-11-28 12:42:21.332987] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.833 [2024-11-28 12:42:21.332993] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.833 [2024-11-28 12:42:21.332998] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:38.833 [2024-11-28 12:42:21.333553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.093 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.093 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:39.093 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:39.093 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:39.093 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.093 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.093 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Re8rfBDX5W 00:18:39.093 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Re8rfBDX5W 00:18:39.093 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:39.352 [2024-11-28 12:42:21.633922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.352 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:39.352 12:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:39.611 [2024-11-28 12:42:22.014916] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:39.611 [2024-11-28 12:42:22.015129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:39.611 12:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:39.871 malloc0 00:18:39.871 12:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:40.172 12:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Re8rfBDX5W 00:18:40.172 12:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:40.470 12:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:40.470 12:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2544931 00:18:40.470 12:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:40.470 12:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2544931 /var/tmp/bdevperf.sock 00:18:40.470 12:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2544931 ']' 00:18:40.470 12:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:40.470 12:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.470 12:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:18:40.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:40.470 12:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.470 12:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.470 [2024-11-28 12:42:22.822912] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:18:40.470 [2024-11-28 12:42:22.822965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2544931 ] 00:18:40.470 [2024-11-28 12:42:22.882620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.470 [2024-11-28 12:42:22.926373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:40.762 12:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:40.762 12:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:40.762 12:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Re8rfBDX5W 00:18:40.762 12:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:41.049 [2024-11-28 12:42:23.395162] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:41.049 TLSTESTn1 00:18:41.049 12:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:41.309 12:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:41.309 "subsystems": [ 00:18:41.309 { 00:18:41.309 "subsystem": "keyring", 00:18:41.309 "config": [ 00:18:41.309 { 00:18:41.309 "method": "keyring_file_add_key", 00:18:41.309 "params": { 00:18:41.309 "name": "key0", 00:18:41.309 "path": "/tmp/tmp.Re8rfBDX5W" 00:18:41.309 } 00:18:41.309 } 00:18:41.309 ] 00:18:41.309 }, 00:18:41.309 { 00:18:41.309 "subsystem": "iobuf", 00:18:41.309 "config": [ 00:18:41.309 { 00:18:41.309 "method": "iobuf_set_options", 00:18:41.309 "params": { 00:18:41.309 "small_pool_count": 8192, 00:18:41.309 "large_pool_count": 1024, 00:18:41.309 "small_bufsize": 8192, 00:18:41.310 "large_bufsize": 135168, 00:18:41.310 "enable_numa": false 00:18:41.310 } 00:18:41.310 } 00:18:41.310 ] 00:18:41.310 }, 00:18:41.310 { 00:18:41.310 "subsystem": "sock", 00:18:41.310 "config": [ 00:18:41.310 { 00:18:41.310 "method": "sock_set_default_impl", 00:18:41.310 "params": { 00:18:41.310 "impl_name": "posix" 00:18:41.310 } 00:18:41.310 }, 00:18:41.310 { 00:18:41.310 "method": "sock_impl_set_options", 00:18:41.310 "params": { 00:18:41.310 "impl_name": "ssl", 00:18:41.310 "recv_buf_size": 4096, 00:18:41.310 "send_buf_size": 4096, 00:18:41.310 "enable_recv_pipe": true, 00:18:41.310 "enable_quickack": false, 00:18:41.310 "enable_placement_id": 0, 00:18:41.310 "enable_zerocopy_send_server": true, 00:18:41.310 "enable_zerocopy_send_client": false, 00:18:41.310 "zerocopy_threshold": 0, 00:18:41.310 "tls_version": 0, 00:18:41.310 "enable_ktls": false 00:18:41.310 } 00:18:41.310 }, 00:18:41.310 { 00:18:41.310 "method": "sock_impl_set_options", 00:18:41.310 "params": { 00:18:41.310 "impl_name": "posix", 00:18:41.310 "recv_buf_size": 2097152, 00:18:41.310 "send_buf_size": 2097152, 00:18:41.310 "enable_recv_pipe": true, 00:18:41.310 "enable_quickack": false, 00:18:41.310 "enable_placement_id": 0, 
00:18:41.310 "enable_zerocopy_send_server": true, 00:18:41.310 "enable_zerocopy_send_client": false, 00:18:41.310 "zerocopy_threshold": 0, 00:18:41.310 "tls_version": 0, 00:18:41.310 "enable_ktls": false 00:18:41.310 } 00:18:41.310 } 00:18:41.310 ] 00:18:41.310 }, 00:18:41.310 { 00:18:41.310 "subsystem": "vmd", 00:18:41.310 "config": [] 00:18:41.310 }, 00:18:41.310 { 00:18:41.310 "subsystem": "accel", 00:18:41.310 "config": [ 00:18:41.310 { 00:18:41.310 "method": "accel_set_options", 00:18:41.310 "params": { 00:18:41.310 "small_cache_size": 128, 00:18:41.310 "large_cache_size": 16, 00:18:41.310 "task_count": 2048, 00:18:41.310 "sequence_count": 2048, 00:18:41.310 "buf_count": 2048 00:18:41.310 } 00:18:41.310 } 00:18:41.310 ] 00:18:41.310 }, 00:18:41.310 { 00:18:41.310 "subsystem": "bdev", 00:18:41.310 "config": [ 00:18:41.310 { 00:18:41.310 "method": "bdev_set_options", 00:18:41.310 "params": { 00:18:41.310 "bdev_io_pool_size": 65535, 00:18:41.310 "bdev_io_cache_size": 256, 00:18:41.310 "bdev_auto_examine": true, 00:18:41.310 "iobuf_small_cache_size": 128, 00:18:41.310 "iobuf_large_cache_size": 16 00:18:41.310 } 00:18:41.310 }, 00:18:41.310 { 00:18:41.310 "method": "bdev_raid_set_options", 00:18:41.310 "params": { 00:18:41.310 "process_window_size_kb": 1024, 00:18:41.310 "process_max_bandwidth_mb_sec": 0 00:18:41.310 } 00:18:41.310 }, 00:18:41.310 { 00:18:41.310 "method": "bdev_iscsi_set_options", 00:18:41.310 "params": { 00:18:41.310 "timeout_sec": 30 00:18:41.310 } 00:18:41.310 }, 00:18:41.310 { 00:18:41.310 "method": "bdev_nvme_set_options", 00:18:41.310 "params": { 00:18:41.310 "action_on_timeout": "none", 00:18:41.310 "timeout_us": 0, 00:18:41.310 "timeout_admin_us": 0, 00:18:41.310 "keep_alive_timeout_ms": 10000, 00:18:41.310 "arbitration_burst": 0, 00:18:41.310 "low_priority_weight": 0, 00:18:41.310 "medium_priority_weight": 0, 00:18:41.310 "high_priority_weight": 0, 00:18:41.310 "nvme_adminq_poll_period_us": 10000, 00:18:41.310 "nvme_ioq_poll_period_us": 0, 
00:18:41.310 "io_queue_requests": 0, 00:18:41.310 "delay_cmd_submit": true, 00:18:41.310 "transport_retry_count": 4, 00:18:41.310 "bdev_retry_count": 3, 00:18:41.310 "transport_ack_timeout": 0, 00:18:41.310 "ctrlr_loss_timeout_sec": 0, 00:18:41.310 "reconnect_delay_sec": 0, 00:18:41.310 "fast_io_fail_timeout_sec": 0, 00:18:41.310 "disable_auto_failback": false, 00:18:41.310 "generate_uuids": false, 00:18:41.310 "transport_tos": 0, 00:18:41.310 "nvme_error_stat": false, 00:18:41.310 "rdma_srq_size": 0, 00:18:41.310 "io_path_stat": false, 00:18:41.310 "allow_accel_sequence": false, 00:18:41.310 "rdma_max_cq_size": 0, 00:18:41.310 "rdma_cm_event_timeout_ms": 0, 00:18:41.310 "dhchap_digests": [ 00:18:41.310 "sha256", 00:18:41.310 "sha384", 00:18:41.310 "sha512" 00:18:41.310 ], 00:18:41.310 "dhchap_dhgroups": [ 00:18:41.310 "null", 00:18:41.310 "ffdhe2048", 00:18:41.310 "ffdhe3072", 00:18:41.310 "ffdhe4096", 00:18:41.310 "ffdhe6144", 00:18:41.310 "ffdhe8192" 00:18:41.310 ] 00:18:41.310 } 00:18:41.310 }, 00:18:41.310 { 00:18:41.310 "method": "bdev_nvme_set_hotplug", 00:18:41.310 "params": { 00:18:41.310 "period_us": 100000, 00:18:41.310 "enable": false 00:18:41.310 } 00:18:41.310 }, 00:18:41.310 { 00:18:41.310 "method": "bdev_malloc_create", 00:18:41.310 "params": { 00:18:41.310 "name": "malloc0", 00:18:41.310 "num_blocks": 8192, 00:18:41.310 "block_size": 4096, 00:18:41.310 "physical_block_size": 4096, 00:18:41.310 "uuid": "2e6a3449-6e73-4e23-a83f-67a805fbe970", 00:18:41.310 "optimal_io_boundary": 0, 00:18:41.310 "md_size": 0, 00:18:41.310 "dif_type": 0, 00:18:41.310 "dif_is_head_of_md": false, 00:18:41.310 "dif_pi_format": 0 00:18:41.310 } 00:18:41.310 }, 00:18:41.310 { 00:18:41.310 "method": "bdev_wait_for_examine" 00:18:41.310 } 00:18:41.310 ] 00:18:41.310 }, 00:18:41.310 { 00:18:41.310 "subsystem": "nbd", 00:18:41.310 "config": [] 00:18:41.310 }, 00:18:41.310 { 00:18:41.310 "subsystem": "scheduler", 00:18:41.310 "config": [ 00:18:41.310 { 00:18:41.310 "method": 
"framework_set_scheduler", 00:18:41.310 "params": { 00:18:41.310 "name": "static" 00:18:41.310 } 00:18:41.310 } 00:18:41.310 ] 00:18:41.310 }, 00:18:41.310 { 00:18:41.310 "subsystem": "nvmf", 00:18:41.310 "config": [ 00:18:41.310 { 00:18:41.310 "method": "nvmf_set_config", 00:18:41.310 "params": { 00:18:41.310 "discovery_filter": "match_any", 00:18:41.310 "admin_cmd_passthru": { 00:18:41.310 "identify_ctrlr": false 00:18:41.310 }, 00:18:41.310 "dhchap_digests": [ 00:18:41.310 "sha256", 00:18:41.310 "sha384", 00:18:41.310 "sha512" 00:18:41.310 ], 00:18:41.310 "dhchap_dhgroups": [ 00:18:41.310 "null", 00:18:41.310 "ffdhe2048", 00:18:41.310 "ffdhe3072", 00:18:41.310 "ffdhe4096", 00:18:41.310 "ffdhe6144", 00:18:41.310 "ffdhe8192" 00:18:41.310 ] 00:18:41.310 } 00:18:41.310 }, 00:18:41.310 { 00:18:41.310 "method": "nvmf_set_max_subsystems", 00:18:41.310 "params": { 00:18:41.310 "max_subsystems": 1024 00:18:41.310 } 00:18:41.310 }, 00:18:41.310 { 00:18:41.310 "method": "nvmf_set_crdt", 00:18:41.310 "params": { 00:18:41.310 "crdt1": 0, 00:18:41.310 "crdt2": 0, 00:18:41.310 "crdt3": 0 00:18:41.310 } 00:18:41.310 }, 00:18:41.310 { 00:18:41.310 "method": "nvmf_create_transport", 00:18:41.310 "params": { 00:18:41.310 "trtype": "TCP", 00:18:41.310 "max_queue_depth": 128, 00:18:41.310 "max_io_qpairs_per_ctrlr": 127, 00:18:41.310 "in_capsule_data_size": 4096, 00:18:41.310 "max_io_size": 131072, 00:18:41.310 "io_unit_size": 131072, 00:18:41.310 "max_aq_depth": 128, 00:18:41.310 "num_shared_buffers": 511, 00:18:41.310 "buf_cache_size": 4294967295, 00:18:41.310 "dif_insert_or_strip": false, 00:18:41.310 "zcopy": false, 00:18:41.310 "c2h_success": false, 00:18:41.310 "sock_priority": 0, 00:18:41.310 "abort_timeout_sec": 1, 00:18:41.310 "ack_timeout": 0, 00:18:41.310 "data_wr_pool_size": 0 00:18:41.310 } 00:18:41.310 }, 00:18:41.310 { 00:18:41.310 "method": "nvmf_create_subsystem", 00:18:41.310 "params": { 00:18:41.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.311 
"allow_any_host": false, 00:18:41.311 "serial_number": "SPDK00000000000001", 00:18:41.311 "model_number": "SPDK bdev Controller", 00:18:41.311 "max_namespaces": 10, 00:18:41.311 "min_cntlid": 1, 00:18:41.311 "max_cntlid": 65519, 00:18:41.311 "ana_reporting": false 00:18:41.311 } 00:18:41.311 }, 00:18:41.311 { 00:18:41.311 "method": "nvmf_subsystem_add_host", 00:18:41.311 "params": { 00:18:41.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.311 "host": "nqn.2016-06.io.spdk:host1", 00:18:41.311 "psk": "key0" 00:18:41.311 } 00:18:41.311 }, 00:18:41.311 { 00:18:41.311 "method": "nvmf_subsystem_add_ns", 00:18:41.311 "params": { 00:18:41.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.311 "namespace": { 00:18:41.311 "nsid": 1, 00:18:41.311 "bdev_name": "malloc0", 00:18:41.311 "nguid": "2E6A34496E734E23A83F67A805FBE970", 00:18:41.311 "uuid": "2e6a3449-6e73-4e23-a83f-67a805fbe970", 00:18:41.311 "no_auto_visible": false 00:18:41.311 } 00:18:41.311 } 00:18:41.311 }, 00:18:41.311 { 00:18:41.311 "method": "nvmf_subsystem_add_listener", 00:18:41.311 "params": { 00:18:41.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.311 "listen_address": { 00:18:41.311 "trtype": "TCP", 00:18:41.311 "adrfam": "IPv4", 00:18:41.311 "traddr": "10.0.0.2", 00:18:41.311 "trsvcid": "4420" 00:18:41.311 }, 00:18:41.311 "secure_channel": true 00:18:41.311 } 00:18:41.311 } 00:18:41.311 ] 00:18:41.311 } 00:18:41.311 ] 00:18:41.311 }' 00:18:41.311 12:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:41.571 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:41.571 "subsystems": [ 00:18:41.571 { 00:18:41.571 "subsystem": "keyring", 00:18:41.571 "config": [ 00:18:41.571 { 00:18:41.571 "method": "keyring_file_add_key", 00:18:41.571 "params": { 00:18:41.571 "name": "key0", 00:18:41.571 "path": "/tmp/tmp.Re8rfBDX5W" 00:18:41.571 } 
00:18:41.571 } 00:18:41.571 ] 00:18:41.571 }, 00:18:41.571 { 00:18:41.571 "subsystem": "iobuf", 00:18:41.571 "config": [ 00:18:41.571 { 00:18:41.571 "method": "iobuf_set_options", 00:18:41.571 "params": { 00:18:41.571 "small_pool_count": 8192, 00:18:41.571 "large_pool_count": 1024, 00:18:41.571 "small_bufsize": 8192, 00:18:41.571 "large_bufsize": 135168, 00:18:41.571 "enable_numa": false 00:18:41.571 } 00:18:41.571 } 00:18:41.571 ] 00:18:41.571 }, 00:18:41.571 { 00:18:41.571 "subsystem": "sock", 00:18:41.571 "config": [ 00:18:41.571 { 00:18:41.571 "method": "sock_set_default_impl", 00:18:41.571 "params": { 00:18:41.571 "impl_name": "posix" 00:18:41.571 } 00:18:41.571 }, 00:18:41.571 { 00:18:41.571 "method": "sock_impl_set_options", 00:18:41.571 "params": { 00:18:41.571 "impl_name": "ssl", 00:18:41.571 "recv_buf_size": 4096, 00:18:41.571 "send_buf_size": 4096, 00:18:41.571 "enable_recv_pipe": true, 00:18:41.571 "enable_quickack": false, 00:18:41.571 "enable_placement_id": 0, 00:18:41.571 "enable_zerocopy_send_server": true, 00:18:41.571 "enable_zerocopy_send_client": false, 00:18:41.571 "zerocopy_threshold": 0, 00:18:41.571 "tls_version": 0, 00:18:41.571 "enable_ktls": false 00:18:41.571 } 00:18:41.571 }, 00:18:41.571 { 00:18:41.571 "method": "sock_impl_set_options", 00:18:41.571 "params": { 00:18:41.571 "impl_name": "posix", 00:18:41.571 "recv_buf_size": 2097152, 00:18:41.571 "send_buf_size": 2097152, 00:18:41.571 "enable_recv_pipe": true, 00:18:41.571 "enable_quickack": false, 00:18:41.571 "enable_placement_id": 0, 00:18:41.571 "enable_zerocopy_send_server": true, 00:18:41.571 "enable_zerocopy_send_client": false, 00:18:41.571 "zerocopy_threshold": 0, 00:18:41.571 "tls_version": 0, 00:18:41.571 "enable_ktls": false 00:18:41.571 } 00:18:41.571 } 00:18:41.571 ] 00:18:41.571 }, 00:18:41.571 { 00:18:41.571 "subsystem": "vmd", 00:18:41.571 "config": [] 00:18:41.571 }, 00:18:41.571 { 00:18:41.571 "subsystem": "accel", 00:18:41.571 "config": [ 00:18:41.571 { 00:18:41.571 
"method": "accel_set_options", 00:18:41.571 "params": { 00:18:41.571 "small_cache_size": 128, 00:18:41.571 "large_cache_size": 16, 00:18:41.571 "task_count": 2048, 00:18:41.571 "sequence_count": 2048, 00:18:41.571 "buf_count": 2048 00:18:41.571 } 00:18:41.571 } 00:18:41.571 ] 00:18:41.571 }, 00:18:41.571 { 00:18:41.571 "subsystem": "bdev", 00:18:41.571 "config": [ 00:18:41.571 { 00:18:41.571 "method": "bdev_set_options", 00:18:41.572 "params": { 00:18:41.572 "bdev_io_pool_size": 65535, 00:18:41.572 "bdev_io_cache_size": 256, 00:18:41.572 "bdev_auto_examine": true, 00:18:41.572 "iobuf_small_cache_size": 128, 00:18:41.572 "iobuf_large_cache_size": 16 00:18:41.572 } 00:18:41.572 }, 00:18:41.572 { 00:18:41.572 "method": "bdev_raid_set_options", 00:18:41.572 "params": { 00:18:41.572 "process_window_size_kb": 1024, 00:18:41.572 "process_max_bandwidth_mb_sec": 0 00:18:41.572 } 00:18:41.572 }, 00:18:41.572 { 00:18:41.572 "method": "bdev_iscsi_set_options", 00:18:41.572 "params": { 00:18:41.572 "timeout_sec": 30 00:18:41.572 } 00:18:41.572 }, 00:18:41.572 { 00:18:41.572 "method": "bdev_nvme_set_options", 00:18:41.572 "params": { 00:18:41.572 "action_on_timeout": "none", 00:18:41.572 "timeout_us": 0, 00:18:41.572 "timeout_admin_us": 0, 00:18:41.572 "keep_alive_timeout_ms": 10000, 00:18:41.572 "arbitration_burst": 0, 00:18:41.572 "low_priority_weight": 0, 00:18:41.572 "medium_priority_weight": 0, 00:18:41.572 "high_priority_weight": 0, 00:18:41.572 "nvme_adminq_poll_period_us": 10000, 00:18:41.572 "nvme_ioq_poll_period_us": 0, 00:18:41.572 "io_queue_requests": 512, 00:18:41.572 "delay_cmd_submit": true, 00:18:41.572 "transport_retry_count": 4, 00:18:41.572 "bdev_retry_count": 3, 00:18:41.572 "transport_ack_timeout": 0, 00:18:41.572 "ctrlr_loss_timeout_sec": 0, 00:18:41.572 "reconnect_delay_sec": 0, 00:18:41.572 "fast_io_fail_timeout_sec": 0, 00:18:41.572 "disable_auto_failback": false, 00:18:41.572 "generate_uuids": false, 00:18:41.572 "transport_tos": 0, 00:18:41.572 
"nvme_error_stat": false, 00:18:41.572 "rdma_srq_size": 0, 00:18:41.572 "io_path_stat": false, 00:18:41.572 "allow_accel_sequence": false, 00:18:41.572 "rdma_max_cq_size": 0, 00:18:41.572 "rdma_cm_event_timeout_ms": 0, 00:18:41.572 "dhchap_digests": [ 00:18:41.572 "sha256", 00:18:41.572 "sha384", 00:18:41.572 "sha512" 00:18:41.572 ], 00:18:41.572 "dhchap_dhgroups": [ 00:18:41.572 "null", 00:18:41.572 "ffdhe2048", 00:18:41.572 "ffdhe3072", 00:18:41.572 "ffdhe4096", 00:18:41.572 "ffdhe6144", 00:18:41.572 "ffdhe8192" 00:18:41.572 ] 00:18:41.572 } 00:18:41.572 }, 00:18:41.572 { 00:18:41.572 "method": "bdev_nvme_attach_controller", 00:18:41.572 "params": { 00:18:41.572 "name": "TLSTEST", 00:18:41.572 "trtype": "TCP", 00:18:41.572 "adrfam": "IPv4", 00:18:41.572 "traddr": "10.0.0.2", 00:18:41.572 "trsvcid": "4420", 00:18:41.572 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.572 "prchk_reftag": false, 00:18:41.572 "prchk_guard": false, 00:18:41.572 "ctrlr_loss_timeout_sec": 0, 00:18:41.572 "reconnect_delay_sec": 0, 00:18:41.572 "fast_io_fail_timeout_sec": 0, 00:18:41.572 "psk": "key0", 00:18:41.572 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:41.572 "hdgst": false, 00:18:41.572 "ddgst": false, 00:18:41.572 "multipath": "multipath" 00:18:41.572 } 00:18:41.572 }, 00:18:41.572 { 00:18:41.572 "method": "bdev_nvme_set_hotplug", 00:18:41.572 "params": { 00:18:41.572 "period_us": 100000, 00:18:41.572 "enable": false 00:18:41.572 } 00:18:41.572 }, 00:18:41.572 { 00:18:41.572 "method": "bdev_wait_for_examine" 00:18:41.572 } 00:18:41.572 ] 00:18:41.572 }, 00:18:41.572 { 00:18:41.572 "subsystem": "nbd", 00:18:41.572 "config": [] 00:18:41.572 } 00:18:41.572 ] 00:18:41.572 }' 00:18:41.572 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2544931 00:18:41.572 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2544931 ']' 00:18:41.572 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2544931 00:18:41.572 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:41.572 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.572 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2544931 00:18:41.572 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:41.572 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:41.572 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2544931' 00:18:41.572 killing process with pid 2544931 00:18:41.572 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2544931 00:18:41.572 Received shutdown signal, test time was about 10.000000 seconds 00:18:41.572 00:18:41.572 Latency(us) 00:18:41.572 [2024-11-28T11:42:24.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.572 [2024-11-28T11:42:24.091Z] =================================================================================================================== 00:18:41.572 [2024-11-28T11:42:24.091Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:41.572 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2544931 00:18:41.832 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2544673 00:18:41.832 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2544673 ']' 00:18:41.832 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2544673 00:18:41.832 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:41.832 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.832 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2544673 00:18:41.832 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:41.832 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:41.832 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2544673' 00:18:41.832 killing process with pid 2544673 00:18:41.832 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2544673 00:18:41.832 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2544673 00:18:42.092 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:42.092 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:42.092 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:42.092 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:42.092 "subsystems": [ 00:18:42.092 { 00:18:42.092 "subsystem": "keyring", 00:18:42.092 "config": [ 00:18:42.092 { 00:18:42.092 "method": "keyring_file_add_key", 00:18:42.092 "params": { 00:18:42.092 "name": "key0", 00:18:42.092 "path": "/tmp/tmp.Re8rfBDX5W" 00:18:42.092 } 00:18:42.092 } 00:18:42.092 ] 00:18:42.092 }, 00:18:42.092 { 00:18:42.092 "subsystem": "iobuf", 00:18:42.092 "config": [ 00:18:42.092 { 00:18:42.092 "method": "iobuf_set_options", 00:18:42.092 "params": { 00:18:42.092 "small_pool_count": 8192, 00:18:42.092 "large_pool_count": 1024, 00:18:42.092 "small_bufsize": 8192, 00:18:42.092 "large_bufsize": 135168, 00:18:42.092 "enable_numa": false 00:18:42.092 } 00:18:42.092 } 00:18:42.092 ] 00:18:42.092 }, 
00:18:42.092 { 00:18:42.092 "subsystem": "sock", 00:18:42.092 "config": [ 00:18:42.092 { 00:18:42.092 "method": "sock_set_default_impl", 00:18:42.092 "params": { 00:18:42.092 "impl_name": "posix" 00:18:42.092 } 00:18:42.092 }, 00:18:42.092 { 00:18:42.092 "method": "sock_impl_set_options", 00:18:42.092 "params": { 00:18:42.092 "impl_name": "ssl", 00:18:42.092 "recv_buf_size": 4096, 00:18:42.092 "send_buf_size": 4096, 00:18:42.092 "enable_recv_pipe": true, 00:18:42.092 "enable_quickack": false, 00:18:42.092 "enable_placement_id": 0, 00:18:42.092 "enable_zerocopy_send_server": true, 00:18:42.092 "enable_zerocopy_send_client": false, 00:18:42.092 "zerocopy_threshold": 0, 00:18:42.092 "tls_version": 0, 00:18:42.092 "enable_ktls": false 00:18:42.092 } 00:18:42.092 }, 00:18:42.092 { 00:18:42.092 "method": "sock_impl_set_options", 00:18:42.092 "params": { 00:18:42.092 "impl_name": "posix", 00:18:42.092 "recv_buf_size": 2097152, 00:18:42.092 "send_buf_size": 2097152, 00:18:42.092 "enable_recv_pipe": true, 00:18:42.092 "enable_quickack": false, 00:18:42.092 "enable_placement_id": 0, 00:18:42.092 "enable_zerocopy_send_server": true, 00:18:42.092 "enable_zerocopy_send_client": false, 00:18:42.092 "zerocopy_threshold": 0, 00:18:42.093 "tls_version": 0, 00:18:42.093 "enable_ktls": false 00:18:42.093 } 00:18:42.093 } 00:18:42.093 ] 00:18:42.093 }, 00:18:42.093 { 00:18:42.093 "subsystem": "vmd", 00:18:42.093 "config": [] 00:18:42.093 }, 00:18:42.093 { 00:18:42.093 "subsystem": "accel", 00:18:42.093 "config": [ 00:18:42.093 { 00:18:42.093 "method": "accel_set_options", 00:18:42.093 "params": { 00:18:42.093 "small_cache_size": 128, 00:18:42.093 "large_cache_size": 16, 00:18:42.093 "task_count": 2048, 00:18:42.093 "sequence_count": 2048, 00:18:42.093 "buf_count": 2048 00:18:42.093 } 00:18:42.093 } 00:18:42.093 ] 00:18:42.093 }, 00:18:42.093 { 00:18:42.093 "subsystem": "bdev", 00:18:42.093 "config": [ 00:18:42.093 { 00:18:42.093 "method": "bdev_set_options", 00:18:42.093 "params": { 
00:18:42.093 "bdev_io_pool_size": 65535, 00:18:42.093 "bdev_io_cache_size": 256, 00:18:42.093 "bdev_auto_examine": true, 00:18:42.093 "iobuf_small_cache_size": 128, 00:18:42.093 "iobuf_large_cache_size": 16 00:18:42.093 } 00:18:42.093 }, 00:18:42.093 { 00:18:42.093 "method": "bdev_raid_set_options", 00:18:42.093 "params": { 00:18:42.093 "process_window_size_kb": 1024, 00:18:42.093 "process_max_bandwidth_mb_sec": 0 00:18:42.093 } 00:18:42.093 }, 00:18:42.093 { 00:18:42.093 "method": "bdev_iscsi_set_options", 00:18:42.093 "params": { 00:18:42.093 "timeout_sec": 30 00:18:42.093 } 00:18:42.093 }, 00:18:42.093 { 00:18:42.093 "method": "bdev_nvme_set_options", 00:18:42.093 "params": { 00:18:42.093 "action_on_timeout": "none", 00:18:42.093 "timeout_us": 0, 00:18:42.093 "timeout_admin_us": 0, 00:18:42.093 "keep_alive_timeout_ms": 10000, 00:18:42.093 "arbitration_burst": 0, 00:18:42.093 "low_priority_weight": 0, 00:18:42.093 "medium_priority_weight": 0, 00:18:42.093 "high_priority_weight": 0, 00:18:42.093 "nvme_adminq_poll_period_us": 10000, 00:18:42.093 "nvme_ioq_poll_period_us": 0, 00:18:42.093 "io_queue_requests": 0, 00:18:42.093 "delay_cmd_submit": true, 00:18:42.093 "transport_retry_count": 4, 00:18:42.093 "bdev_retry_count": 3, 00:18:42.093 "transport_ack_timeout": 0, 00:18:42.093 "ctrlr_loss_timeout_sec": 0, 00:18:42.093 "reconnect_delay_sec": 0, 00:18:42.093 "fast_io_fail_timeout_sec": 0, 00:18:42.093 "disable_auto_failback": false, 00:18:42.093 "generate_uuids": false, 00:18:42.093 "transport_tos": 0, 00:18:42.093 "nvme_error_stat": false, 00:18:42.093 "rdma_srq_size": 0, 00:18:42.093 "io_path_stat": false, 00:18:42.093 "allow_accel_sequence": false, 00:18:42.093 "rdma_max_cq_size": 0, 00:18:42.093 "rdma_cm_event_timeout_ms": 0, 00:18:42.093 "dhchap_digests": [ 00:18:42.093 "sha256", 00:18:42.093 "sha384", 00:18:42.093 "sha512" 00:18:42.093 ], 00:18:42.093 "dhchap_dhgroups": [ 00:18:42.093 "null", 00:18:42.093 "ffdhe2048", 00:18:42.093 "ffdhe3072", 00:18:42.093 
"ffdhe4096", 00:18:42.093 "ffdhe6144", 00:18:42.093 "ffdhe8192" 00:18:42.093 ] 00:18:42.093 } 00:18:42.093 }, 00:18:42.093 { 00:18:42.093 "method": "bdev_nvme_set_hotplug", 00:18:42.093 "params": { 00:18:42.093 "period_us": 100000, 00:18:42.093 "enable": false 00:18:42.093 } 00:18:42.093 }, 00:18:42.093 { 00:18:42.093 "method": "bdev_malloc_create", 00:18:42.093 "params": { 00:18:42.093 "name": "malloc0", 00:18:42.093 "num_blocks": 8192, 00:18:42.093 "block_size": 4096, 00:18:42.093 "physical_block_size": 4096, 00:18:42.093 "uuid": "2e6a3449-6e73-4e23-a83f-67a805fbe970", 00:18:42.093 "optimal_io_boundary": 0, 00:18:42.093 "md_size": 0, 00:18:42.093 "dif_type": 0, 00:18:42.093 "dif_is_head_of_md": false, 00:18:42.093 "dif_pi_format": 0 00:18:42.093 } 00:18:42.093 }, 00:18:42.093 { 00:18:42.093 "method": "bdev_wait_for_examine" 00:18:42.093 } 00:18:42.093 ] 00:18:42.093 }, 00:18:42.093 { 00:18:42.093 "subsystem": "nbd", 00:18:42.093 "config": [] 00:18:42.093 }, 00:18:42.093 { 00:18:42.093 "subsystem": "scheduler", 00:18:42.093 "config": [ 00:18:42.093 { 00:18:42.093 "method": "framework_set_scheduler", 00:18:42.093 "params": { 00:18:42.093 "name": "static" 00:18:42.093 } 00:18:42.093 } 00:18:42.093 ] 00:18:42.093 }, 00:18:42.093 { 00:18:42.093 "subsystem": "nvmf", 00:18:42.093 "config": [ 00:18:42.093 { 00:18:42.093 "method": "nvmf_set_config", 00:18:42.093 "params": { 00:18:42.093 "discovery_filter": "match_any", 00:18:42.093 "admin_cmd_passthru": { 00:18:42.093 "identify_ctrlr": false 00:18:42.093 }, 00:18:42.093 "dhchap_digests": [ 00:18:42.093 "sha256", 00:18:42.093 "sha384", 00:18:42.093 "sha512" 00:18:42.093 ], 00:18:42.093 "dhchap_dhgroups": [ 00:18:42.093 "null", 00:18:42.093 "ffdhe2048", 00:18:42.093 "ffdhe3072", 00:18:42.093 "ffdhe4096", 00:18:42.093 "ffdhe6144", 00:18:42.093 "ffdhe8192" 00:18:42.093 ] 00:18:42.093 } 00:18:42.093 }, 00:18:42.093 { 00:18:42.093 "method": "nvmf_set_max_subsystems", 00:18:42.093 "params": { 00:18:42.093 "max_subsystems": 1024 
00:18:42.093 } 00:18:42.093 }, 00:18:42.093 { 00:18:42.093 "method": "nvmf_set_crdt", 00:18:42.093 "params": { 00:18:42.093 "crdt1": 0, 00:18:42.093 "crdt2": 0, 00:18:42.093 "crdt3": 0 00:18:42.093 } 00:18:42.093 }, 00:18:42.093 { 00:18:42.093 "method": "nvmf_create_transport", 00:18:42.093 "params": { 00:18:42.093 "trtype": "TCP", 00:18:42.093 "max_queue_depth": 128, 00:18:42.093 "max_io_qpairs_per_ctrlr": 127, 00:18:42.093 "in_capsule_data_size": 4096, 00:18:42.093 "max_io_size": 131072, 00:18:42.093 "io_unit_size": 131072, 00:18:42.093 "max_aq_depth": 128, 00:18:42.093 "num_shared_buffers": 511, 00:18:42.093 "buf_cache_size": 4294967295, 00:18:42.093 "dif_insert_or_strip": false, 00:18:42.093 "zcopy": false, 00:18:42.093 "c2h_success": false, 00:18:42.093 "sock_priority": 0, 00:18:42.093 "abort_timeout_sec": 1, 00:18:42.093 "ack_timeout": 0, 00:18:42.093 "data_wr_pool_size": 0 00:18:42.093 } 00:18:42.093 }, 00:18:42.093 { 00:18:42.093 "method": "nvmf_create_subsystem", 00:18:42.093 "params": { 00:18:42.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.093 "allow_any_host": false, 00:18:42.093 "serial_number": "SPDK00000000000001", 00:18:42.093 "model_number": "SPDK bdev Controller", 00:18:42.093 "max_namespaces": 10, 00:18:42.093 "min_cntlid": 1, 00:18:42.093 "max_cntlid": 65519, 00:18:42.093 "ana_reporting": false 00:18:42.093 } 00:18:42.093 }, 00:18:42.093 { 00:18:42.093 "method": "nvmf_subsystem_add_host", 00:18:42.093 "params": { 00:18:42.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.093 "host": "nqn.2016-06.io.spdk:host1", 00:18:42.093 "psk": "key0" 00:18:42.093 } 00:18:42.093 }, 00:18:42.093 { 00:18:42.093 "method": "nvmf_subsystem_add_ns", 00:18:42.093 "params": { 00:18:42.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.093 "namespace": { 00:18:42.093 "nsid": 1, 00:18:42.093 "bdev_name": "malloc0", 00:18:42.093 "nguid": "2E6A34496E734E23A83F67A805FBE970", 00:18:42.093 "uuid": "2e6a3449-6e73-4e23-a83f-67a805fbe970", 00:18:42.093 "no_auto_visible": 
false 00:18:42.093 } 00:18:42.093 } 00:18:42.093 }, 00:18:42.093 { 00:18:42.093 "method": "nvmf_subsystem_add_listener", 00:18:42.093 "params": { 00:18:42.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.093 "listen_address": { 00:18:42.093 "trtype": "TCP", 00:18:42.093 "adrfam": "IPv4", 00:18:42.093 "traddr": "10.0.0.2", 00:18:42.093 "trsvcid": "4420" 00:18:42.093 }, 00:18:42.093 "secure_channel": true 00:18:42.093 } 00:18:42.093 } 00:18:42.093 ] 00:18:42.093 } 00:18:42.093 ] 00:18:42.093 }' 00:18:42.094 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.094 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2545296 00:18:42.094 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:42.094 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2545296 00:18:42.094 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2545296 ']' 00:18:42.094 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.094 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.094 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:42.094 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.094 12:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.094 [2024-11-28 12:42:24.509963] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:18:42.094 [2024-11-28 12:42:24.510012] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.094 [2024-11-28 12:42:24.575543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.354 [2024-11-28 12:42:24.618130] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.354 [2024-11-28 12:42:24.618164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.354 [2024-11-28 12:42:24.618171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.354 [2024-11-28 12:42:24.618178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.354 [2024-11-28 12:42:24.618183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:42.354 [2024-11-28 12:42:24.618800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.354 [2024-11-28 12:42:24.832999] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:42.354 [2024-11-28 12:42:24.865028] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:42.354 [2024-11-28 12:42:24.865231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.922 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.922 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:42.922 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:42.922 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:42.922 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.922 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.922 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2545436 00:18:42.922 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2545436 /var/tmp/bdevperf.sock 00:18:42.922 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2545436 ']' 00:18:42.922 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:42.922 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:42.922 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:42.922 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:42.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:42.922 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:42.922 "subsystems": [ 00:18:42.922 { 00:18:42.922 "subsystem": "keyring", 00:18:42.922 "config": [ 00:18:42.922 { 00:18:42.922 "method": "keyring_file_add_key", 00:18:42.922 "params": { 00:18:42.922 "name": "key0", 00:18:42.922 "path": "/tmp/tmp.Re8rfBDX5W" 00:18:42.922 } 00:18:42.922 } 00:18:42.922 ] 00:18:42.922 }, 00:18:42.922 { 00:18:42.922 "subsystem": "iobuf", 00:18:42.922 "config": [ 00:18:42.922 { 00:18:42.922 "method": "iobuf_set_options", 00:18:42.922 "params": { 00:18:42.922 "small_pool_count": 8192, 00:18:42.923 "large_pool_count": 1024, 00:18:42.923 "small_bufsize": 8192, 00:18:42.923 "large_bufsize": 135168, 00:18:42.923 "enable_numa": false 00:18:42.923 } 00:18:42.923 } 00:18:42.923 ] 00:18:42.923 }, 00:18:42.923 { 00:18:42.923 "subsystem": "sock", 00:18:42.923 "config": [ 00:18:42.923 { 00:18:42.923 "method": "sock_set_default_impl", 00:18:42.923 "params": { 00:18:42.923 "impl_name": "posix" 00:18:42.923 } 00:18:42.923 }, 00:18:42.923 { 00:18:42.923 "method": "sock_impl_set_options", 00:18:42.923 "params": { 00:18:42.923 "impl_name": "ssl", 00:18:42.923 "recv_buf_size": 4096, 00:18:42.923 "send_buf_size": 4096, 00:18:42.923 "enable_recv_pipe": true, 00:18:42.923 "enable_quickack": false, 00:18:42.923 "enable_placement_id": 0, 00:18:42.923 "enable_zerocopy_send_server": true, 00:18:42.923 "enable_zerocopy_send_client": false, 00:18:42.923 "zerocopy_threshold": 0, 00:18:42.923 "tls_version": 0, 00:18:42.923 "enable_ktls": false 00:18:42.923 } 00:18:42.923 }, 00:18:42.923 { 00:18:42.923 "method": "sock_impl_set_options", 00:18:42.923 "params": { 
00:18:42.923 "impl_name": "posix", 00:18:42.923 "recv_buf_size": 2097152, 00:18:42.923 "send_buf_size": 2097152, 00:18:42.923 "enable_recv_pipe": true, 00:18:42.923 "enable_quickack": false, 00:18:42.923 "enable_placement_id": 0, 00:18:42.923 "enable_zerocopy_send_server": true, 00:18:42.923 "enable_zerocopy_send_client": false, 00:18:42.923 "zerocopy_threshold": 0, 00:18:42.923 "tls_version": 0, 00:18:42.923 "enable_ktls": false 00:18:42.923 } 00:18:42.923 } 00:18:42.923 ] 00:18:42.923 }, 00:18:42.923 { 00:18:42.923 "subsystem": "vmd", 00:18:42.923 "config": [] 00:18:42.923 }, 00:18:42.923 { 00:18:42.923 "subsystem": "accel", 00:18:42.923 "config": [ 00:18:42.923 { 00:18:42.923 "method": "accel_set_options", 00:18:42.923 "params": { 00:18:42.923 "small_cache_size": 128, 00:18:42.923 "large_cache_size": 16, 00:18:42.923 "task_count": 2048, 00:18:42.923 "sequence_count": 2048, 00:18:42.923 "buf_count": 2048 00:18:42.923 } 00:18:42.923 } 00:18:42.923 ] 00:18:42.923 }, 00:18:42.923 { 00:18:42.923 "subsystem": "bdev", 00:18:42.923 "config": [ 00:18:42.923 { 00:18:42.923 "method": "bdev_set_options", 00:18:42.923 "params": { 00:18:42.923 "bdev_io_pool_size": 65535, 00:18:42.923 "bdev_io_cache_size": 256, 00:18:42.923 "bdev_auto_examine": true, 00:18:42.923 "iobuf_small_cache_size": 128, 00:18:42.923 "iobuf_large_cache_size": 16 00:18:42.923 } 00:18:42.923 }, 00:18:42.923 { 00:18:42.923 "method": "bdev_raid_set_options", 00:18:42.923 "params": { 00:18:42.923 "process_window_size_kb": 1024, 00:18:42.923 "process_max_bandwidth_mb_sec": 0 00:18:42.923 } 00:18:42.923 }, 00:18:42.923 { 00:18:42.923 "method": "bdev_iscsi_set_options", 00:18:42.923 "params": { 00:18:42.923 "timeout_sec": 30 00:18:42.923 } 00:18:42.923 }, 00:18:42.923 { 00:18:42.923 "method": "bdev_nvme_set_options", 00:18:42.923 "params": { 00:18:42.923 "action_on_timeout": "none", 00:18:42.923 "timeout_us": 0, 00:18:42.923 "timeout_admin_us": 0, 00:18:42.923 "keep_alive_timeout_ms": 10000, 00:18:42.923 
"arbitration_burst": 0, 00:18:42.923 "low_priority_weight": 0, 00:18:42.923 "medium_priority_weight": 0, 00:18:42.923 "high_priority_weight": 0, 00:18:42.923 "nvme_adminq_poll_period_us": 10000, 00:18:42.923 "nvme_ioq_poll_period_us": 0, 00:18:42.923 "io_queue_requests": 512, 00:18:42.923 "delay_cmd_submit": true, 00:18:42.923 "transport_retry_count": 4, 00:18:42.923 "bdev_retry_count": 3, 00:18:42.923 "transport_ack_timeout": 0, 00:18:42.923 "ctrlr_loss_timeout_sec": 0, 00:18:42.923 "reconnect_delay_sec": 0, 00:18:42.923 "fast_io_fail_timeout_sec": 0, 00:18:42.923 "disable_auto_failback": false, 00:18:42.923 "generate_uuids": false, 00:18:42.923 "transport_tos": 0, 00:18:42.923 "nvme_error_stat": false, 00:18:42.923 "rdma_srq_size": 0, 00:18:42.923 "io_path_stat": false, 00:18:42.923 "allow_accel_sequence": false, 00:18:42.923 "rdma_max_cq_size": 0, 00:18:42.923 "rdma_cm_event_timeout_ms": 0, 00:18:42.923 "dhchap_digests": [ 00:18:42.923 "sha256", 00:18:42.923 "sha384", 00:18:42.923 "sha512" 00:18:42.923 ], 00:18:42.923 "dhchap_dhgroups": [ 00:18:42.923 "null", 00:18:42.923 "ffdhe2048", 00:18:42.923 "ffdhe3072", 00:18:42.923 "ffdhe4096", 00:18:42.923 "ffdhe6144", 00:18:42.923 "ffdhe8192" 00:18:42.923 ] 00:18:42.923 } 00:18:42.923 }, 00:18:42.923 { 00:18:42.923 "method": "bdev_nvme_attach_controller", 00:18:42.923 "params": { 00:18:42.923 "name": "TLSTEST", 00:18:42.923 "trtype": "TCP", 00:18:42.923 "adrfam": "IPv4", 00:18:42.923 "traddr": "10.0.0.2", 00:18:42.923 "trsvcid": "4420", 00:18:42.923 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.923 "prchk_reftag": false, 00:18:42.923 "prchk_guard": false, 00:18:42.923 "ctrlr_loss_timeout_sec": 0, 00:18:42.923 "reconnect_delay_sec": 0, 00:18:42.923 "fast_io_fail_timeout_sec": 0, 00:18:42.923 "psk": "key0", 00:18:42.923 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:42.923 "hdgst": false, 00:18:42.923 "ddgst": false, 00:18:42.923 "multipath": "multipath" 00:18:42.923 } 00:18:42.923 }, 00:18:42.923 { 00:18:42.923 
"method": "bdev_nvme_set_hotplug", 00:18:42.923 "params": { 00:18:42.923 "period_us": 100000, 00:18:42.923 "enable": false 00:18:42.923 } 00:18:42.923 }, 00:18:42.923 { 00:18:42.923 "method": "bdev_wait_for_examine" 00:18:42.923 } 00:18:42.923 ] 00:18:42.923 }, 00:18:42.923 { 00:18:42.923 "subsystem": "nbd", 00:18:42.923 "config": [] 00:18:42.923 } 00:18:42.923 ] 00:18:42.923 }' 00:18:42.923 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.923 12:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.923 [2024-11-28 12:42:25.425895] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:18:42.923 [2024-11-28 12:42:25.425943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2545436 ] 00:18:43.183 [2024-11-28 12:42:25.484335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.183 [2024-11-28 12:42:25.526970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.183 [2024-11-28 12:42:25.679680] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:43.750 12:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.750 12:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:43.750 12:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:44.009 Running I/O for 10 seconds... 
00:18:45.882 5334.00 IOPS, 20.84 MiB/s [2024-11-28T11:42:29.779Z] 5147.00 IOPS, 20.11 MiB/s [2024-11-28T11:42:30.715Z] 5216.00 IOPS, 20.38 MiB/s [2024-11-28T11:42:31.648Z] 5287.50 IOPS, 20.65 MiB/s [2024-11-28T11:42:32.583Z] 5314.80 IOPS, 20.76 MiB/s [2024-11-28T11:42:33.518Z] 5326.67 IOPS, 20.81 MiB/s [2024-11-28T11:42:34.454Z] 5335.43 IOPS, 20.84 MiB/s [2024-11-28T11:42:35.387Z] 5343.12 IOPS, 20.87 MiB/s [2024-11-28T11:42:36.758Z] 5356.89 IOPS, 20.93 MiB/s [2024-11-28T11:42:36.758Z] 5383.50 IOPS, 21.03 MiB/s 00:18:54.239 Latency(us) 00:18:54.239 [2024-11-28T11:42:36.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.239 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:54.239 Verification LBA range: start 0x0 length 0x2000 00:18:54.239 TLSTESTn1 : 10.01 5388.18 21.05 0.00 0.00 23719.49 6411.13 24390.79 00:18:54.239 [2024-11-28T11:42:36.758Z] =================================================================================================================== 00:18:54.239 [2024-11-28T11:42:36.758Z] Total : 5388.18 21.05 0.00 0.00 23719.49 6411.13 24390.79 00:18:54.239 { 00:18:54.239 "results": [ 00:18:54.239 { 00:18:54.239 "job": "TLSTESTn1", 00:18:54.239 "core_mask": "0x4", 00:18:54.239 "workload": "verify", 00:18:54.239 "status": "finished", 00:18:54.239 "verify_range": { 00:18:54.239 "start": 0, 00:18:54.239 "length": 8192 00:18:54.239 }, 00:18:54.239 "queue_depth": 128, 00:18:54.239 "io_size": 4096, 00:18:54.239 "runtime": 10.014522, 00:18:54.239 "iops": 5388.1752918411885, 00:18:54.239 "mibps": 21.047559733754643, 00:18:54.239 "io_failed": 0, 00:18:54.239 "io_timeout": 0, 00:18:54.239 "avg_latency_us": 23719.485172011475, 00:18:54.239 "min_latency_us": 6411.130434782609, 00:18:54.239 "max_latency_us": 24390.78956521739 00:18:54.239 } 00:18:54.239 ], 00:18:54.239 "core_count": 1 00:18:54.239 } 00:18:54.240 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:54.240 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2545436 00:18:54.240 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2545436 ']' 00:18:54.240 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2545436 00:18:54.240 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:54.240 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.240 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2545436 00:18:54.240 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:54.240 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:54.240 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2545436' 00:18:54.240 killing process with pid 2545436 00:18:54.240 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2545436 00:18:54.240 Received shutdown signal, test time was about 10.000000 seconds 00:18:54.240 00:18:54.240 Latency(us) 00:18:54.240 [2024-11-28T11:42:36.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.240 [2024-11-28T11:42:36.759Z] =================================================================================================================== 00:18:54.240 [2024-11-28T11:42:36.759Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:54.240 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2545436 00:18:54.240 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2545296 00:18:54.240 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 2545296 ']' 00:18:54.240 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2545296 00:18:54.240 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:54.240 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.240 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2545296 00:18:54.240 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:54.240 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:54.240 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2545296' 00:18:54.240 killing process with pid 2545296 00:18:54.240 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2545296 00:18:54.240 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2545296 00:18:54.498 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:54.498 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:54.498 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:54.498 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.498 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2547276 00:18:54.498 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:54.498 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2547276 00:18:54.498 
12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2547276 ']' 00:18:54.498 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.499 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:54.499 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.499 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.499 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.499 [2024-11-28 12:42:36.896726] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:18:54.499 [2024-11-28 12:42:36.896772] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.499 [2024-11-28 12:42:36.961436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.499 [2024-11-28 12:42:37.002588] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:54.499 [2024-11-28 12:42:37.002625] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:54.499 [2024-11-28 12:42:37.002633] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:54.499 [2024-11-28 12:42:37.002639] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:54.499 [2024-11-28 12:42:37.002644] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:54.499 [2024-11-28 12:42:37.003207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.757 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.757 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:54.757 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:54.757 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:54.757 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.757 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.757 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Re8rfBDX5W 00:18:54.757 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Re8rfBDX5W 00:18:54.757 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:55.015 [2024-11-28 12:42:37.317674] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.015 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:55.273 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:55.273 [2024-11-28 12:42:37.702679] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:18:55.273 [2024-11-28 12:42:37.702876] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.273 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:55.530 malloc0 00:18:55.530 12:42:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:55.787 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Re8rfBDX5W 00:18:55.788 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:56.045 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:56.045 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2547541 00:18:56.045 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:56.045 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2547541 /var/tmp/bdevperf.sock 00:18:56.045 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2547541 ']' 00:18:56.045 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:56.045 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.045 
12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:56.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:56.045 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.045 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.045 [2024-11-28 12:42:38.495752] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:18:56.045 [2024-11-28 12:42:38.495800] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2547541 ] 00:18:56.045 [2024-11-28 12:42:38.559722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.302 [2024-11-28 12:42:38.602884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.303 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.303 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:56.303 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Re8rfBDX5W 00:18:56.560 12:42:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:56.560 [2024-11-28 12:42:39.060888] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:18:56.818 nvme0n1 00:18:56.818 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:56.818 Running I/O for 1 seconds... 00:18:57.753 5174.00 IOPS, 20.21 MiB/s 00:18:57.753 Latency(us) 00:18:57.753 [2024-11-28T11:42:40.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.753 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:57.753 Verification LBA range: start 0x0 length 0x2000 00:18:57.753 nvme0n1 : 1.01 5225.31 20.41 0.00 0.00 24311.76 5869.75 56303.97 00:18:57.753 [2024-11-28T11:42:40.272Z] =================================================================================================================== 00:18:57.753 [2024-11-28T11:42:40.272Z] Total : 5225.31 20.41 0.00 0.00 24311.76 5869.75 56303.97 00:18:57.753 { 00:18:57.753 "results": [ 00:18:57.753 { 00:18:57.753 "job": "nvme0n1", 00:18:57.753 "core_mask": "0x2", 00:18:57.753 "workload": "verify", 00:18:57.753 "status": "finished", 00:18:57.753 "verify_range": { 00:18:57.753 "start": 0, 00:18:57.753 "length": 8192 00:18:57.753 }, 00:18:57.753 "queue_depth": 128, 00:18:57.753 "io_size": 4096, 00:18:57.753 "runtime": 1.014677, 00:18:57.753 "iops": 5225.308152249435, 00:18:57.753 "mibps": 20.411359969724355, 00:18:57.753 "io_failed": 0, 00:18:57.753 "io_timeout": 0, 00:18:57.753 "avg_latency_us": 24311.756844504947, 00:18:57.753 "min_latency_us": 5869.746086956522, 00:18:57.753 "max_latency_us": 56303.97217391304 00:18:57.753 } 00:18:57.753 ], 00:18:57.753 "core_count": 1 00:18:57.753 } 00:18:57.753 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2547541 00:18:57.753 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2547541 ']' 00:18:57.753 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 2547541 00:18:57.753 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:57.753 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:57.753 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2547541 00:18:58.012 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:58.012 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:58.012 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2547541' 00:18:58.012 killing process with pid 2547541 00:18:58.012 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2547541 00:18:58.012 Received shutdown signal, test time was about 1.000000 seconds 00:18:58.012 00:18:58.012 Latency(us) 00:18:58.012 [2024-11-28T11:42:40.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.012 [2024-11-28T11:42:40.531Z] =================================================================================================================== 00:18:58.012 [2024-11-28T11:42:40.531Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:58.012 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2547541 00:18:58.012 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2547276 00:18:58.012 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2547276 ']' 00:18:58.012 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2547276 00:18:58.012 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:58.012 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.012 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2547276 00:18:58.012 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:58.012 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:58.012 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2547276' 00:18:58.012 killing process with pid 2547276 00:18:58.012 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2547276 00:18:58.012 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2547276 00:18:58.271 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:58.271 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:58.271 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:58.271 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.271 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2548000 00:18:58.271 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:58.271 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2548000 00:18:58.271 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2548000 ']' 00:18:58.271 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.271 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:18:58.271 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.271 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.271 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.271 [2024-11-28 12:42:40.747198] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:18:58.271 [2024-11-28 12:42:40.747246] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.529 [2024-11-28 12:42:40.814286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.529 [2024-11-28 12:42:40.850806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.529 [2024-11-28 12:42:40.850843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.529 [2024-11-28 12:42:40.850850] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.529 [2024-11-28 12:42:40.850856] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.529 [2024-11-28 12:42:40.850861] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:58.529 [2024-11-28 12:42:40.851460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.529 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.529 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:58.529 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:58.529 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:58.529 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.529 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.529 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:58.529 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.529 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.529 [2024-11-28 12:42:40.987635] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.529 malloc0 00:18:58.529 [2024-11-28 12:42:41.015877] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:58.529 [2024-11-28 12:42:41.016090] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.529 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.529 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2548022 00:18:58.787 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:58.787 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 2548022 /var/tmp/bdevperf.sock 00:18:58.787 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2548022 ']' 00:18:58.787 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:58.787 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.787 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:58.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:58.787 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.787 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.787 [2024-11-28 12:42:41.091445] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:18:58.788 [2024-11-28 12:42:41.091487] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2548022 ] 00:18:58.788 [2024-11-28 12:42:41.151876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.788 [2024-11-28 12:42:41.192575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.788 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.788 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:58.788 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Re8rfBDX5W 00:18:59.046 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:59.304 [2024-11-28 12:42:41.650111] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:59.304 nvme0n1 00:18:59.304 12:42:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:59.562 Running I/O for 1 seconds... 
00:19:00.495 5300.00 IOPS, 20.70 MiB/s 00:19:00.495 Latency(us) 00:19:00.495 [2024-11-28T11:42:43.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.495 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:00.495 Verification LBA range: start 0x0 length 0x2000 00:19:00.495 nvme0n1 : 1.02 5330.80 20.82 0.00 0.00 23802.17 6439.62 38751.72 00:19:00.495 [2024-11-28T11:42:43.014Z] =================================================================================================================== 00:19:00.495 [2024-11-28T11:42:43.014Z] Total : 5330.80 20.82 0.00 0.00 23802.17 6439.62 38751.72 00:19:00.495 { 00:19:00.495 "results": [ 00:19:00.495 { 00:19:00.495 "job": "nvme0n1", 00:19:00.495 "core_mask": "0x2", 00:19:00.495 "workload": "verify", 00:19:00.495 "status": "finished", 00:19:00.495 "verify_range": { 00:19:00.495 "start": 0, 00:19:00.495 "length": 8192 00:19:00.495 }, 00:19:00.495 "queue_depth": 128, 00:19:00.495 "io_size": 4096, 00:19:00.495 "runtime": 1.018422, 00:19:00.495 "iops": 5330.796074711662, 00:19:00.495 "mibps": 20.82342216684243, 00:19:00.495 "io_failed": 0, 00:19:00.495 "io_timeout": 0, 00:19:00.495 "avg_latency_us": 23802.174286560905, 00:19:00.495 "min_latency_us": 6439.624347826087, 00:19:00.495 "max_latency_us": 38751.72173913044 00:19:00.495 } 00:19:00.495 ], 00:19:00.495 "core_count": 1 00:19:00.495 } 00:19:00.495 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:00.495 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.495 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.495 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.495 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:00.495 "subsystems": [ 00:19:00.495 { 00:19:00.495 "subsystem": 
"keyring", 00:19:00.495 "config": [ 00:19:00.495 { 00:19:00.495 "method": "keyring_file_add_key", 00:19:00.495 "params": { 00:19:00.495 "name": "key0", 00:19:00.495 "path": "/tmp/tmp.Re8rfBDX5W" 00:19:00.495 } 00:19:00.495 } 00:19:00.495 ] 00:19:00.495 }, 00:19:00.495 { 00:19:00.495 "subsystem": "iobuf", 00:19:00.495 "config": [ 00:19:00.495 { 00:19:00.495 "method": "iobuf_set_options", 00:19:00.495 "params": { 00:19:00.495 "small_pool_count": 8192, 00:19:00.495 "large_pool_count": 1024, 00:19:00.495 "small_bufsize": 8192, 00:19:00.495 "large_bufsize": 135168, 00:19:00.495 "enable_numa": false 00:19:00.495 } 00:19:00.495 } 00:19:00.495 ] 00:19:00.495 }, 00:19:00.495 { 00:19:00.495 "subsystem": "sock", 00:19:00.495 "config": [ 00:19:00.495 { 00:19:00.495 "method": "sock_set_default_impl", 00:19:00.495 "params": { 00:19:00.495 "impl_name": "posix" 00:19:00.495 } 00:19:00.495 }, 00:19:00.495 { 00:19:00.495 "method": "sock_impl_set_options", 00:19:00.495 "params": { 00:19:00.495 "impl_name": "ssl", 00:19:00.495 "recv_buf_size": 4096, 00:19:00.495 "send_buf_size": 4096, 00:19:00.495 "enable_recv_pipe": true, 00:19:00.495 "enable_quickack": false, 00:19:00.495 "enable_placement_id": 0, 00:19:00.495 "enable_zerocopy_send_server": true, 00:19:00.495 "enable_zerocopy_send_client": false, 00:19:00.495 "zerocopy_threshold": 0, 00:19:00.495 "tls_version": 0, 00:19:00.495 "enable_ktls": false 00:19:00.495 } 00:19:00.495 }, 00:19:00.495 { 00:19:00.495 "method": "sock_impl_set_options", 00:19:00.495 "params": { 00:19:00.495 "impl_name": "posix", 00:19:00.495 "recv_buf_size": 2097152, 00:19:00.495 "send_buf_size": 2097152, 00:19:00.495 "enable_recv_pipe": true, 00:19:00.495 "enable_quickack": false, 00:19:00.495 "enable_placement_id": 0, 00:19:00.495 "enable_zerocopy_send_server": true, 00:19:00.495 "enable_zerocopy_send_client": false, 00:19:00.495 "zerocopy_threshold": 0, 00:19:00.495 "tls_version": 0, 00:19:00.495 "enable_ktls": false 00:19:00.495 } 00:19:00.495 } 00:19:00.495 
] 00:19:00.495 }, 00:19:00.495 { 00:19:00.495 "subsystem": "vmd", 00:19:00.495 "config": [] 00:19:00.495 }, 00:19:00.495 { 00:19:00.495 "subsystem": "accel", 00:19:00.495 "config": [ 00:19:00.495 { 00:19:00.495 "method": "accel_set_options", 00:19:00.495 "params": { 00:19:00.495 "small_cache_size": 128, 00:19:00.495 "large_cache_size": 16, 00:19:00.495 "task_count": 2048, 00:19:00.495 "sequence_count": 2048, 00:19:00.495 "buf_count": 2048 00:19:00.495 } 00:19:00.495 } 00:19:00.495 ] 00:19:00.495 }, 00:19:00.495 { 00:19:00.495 "subsystem": "bdev", 00:19:00.495 "config": [ 00:19:00.495 { 00:19:00.495 "method": "bdev_set_options", 00:19:00.495 "params": { 00:19:00.495 "bdev_io_pool_size": 65535, 00:19:00.495 "bdev_io_cache_size": 256, 00:19:00.495 "bdev_auto_examine": true, 00:19:00.495 "iobuf_small_cache_size": 128, 00:19:00.495 "iobuf_large_cache_size": 16 00:19:00.495 } 00:19:00.495 }, 00:19:00.495 { 00:19:00.495 "method": "bdev_raid_set_options", 00:19:00.495 "params": { 00:19:00.495 "process_window_size_kb": 1024, 00:19:00.495 "process_max_bandwidth_mb_sec": 0 00:19:00.495 } 00:19:00.495 }, 00:19:00.495 { 00:19:00.495 "method": "bdev_iscsi_set_options", 00:19:00.495 "params": { 00:19:00.495 "timeout_sec": 30 00:19:00.495 } 00:19:00.495 }, 00:19:00.495 { 00:19:00.495 "method": "bdev_nvme_set_options", 00:19:00.495 "params": { 00:19:00.495 "action_on_timeout": "none", 00:19:00.495 "timeout_us": 0, 00:19:00.495 "timeout_admin_us": 0, 00:19:00.495 "keep_alive_timeout_ms": 10000, 00:19:00.495 "arbitration_burst": 0, 00:19:00.495 "low_priority_weight": 0, 00:19:00.495 "medium_priority_weight": 0, 00:19:00.495 "high_priority_weight": 0, 00:19:00.495 "nvme_adminq_poll_period_us": 10000, 00:19:00.495 "nvme_ioq_poll_period_us": 0, 00:19:00.495 "io_queue_requests": 0, 00:19:00.495 "delay_cmd_submit": true, 00:19:00.495 "transport_retry_count": 4, 00:19:00.495 "bdev_retry_count": 3, 00:19:00.495 "transport_ack_timeout": 0, 00:19:00.495 "ctrlr_loss_timeout_sec": 0, 
00:19:00.495 "reconnect_delay_sec": 0, 00:19:00.495 "fast_io_fail_timeout_sec": 0, 00:19:00.495 "disable_auto_failback": false, 00:19:00.495 "generate_uuids": false, 00:19:00.495 "transport_tos": 0, 00:19:00.495 "nvme_error_stat": false, 00:19:00.495 "rdma_srq_size": 0, 00:19:00.495 "io_path_stat": false, 00:19:00.495 "allow_accel_sequence": false, 00:19:00.495 "rdma_max_cq_size": 0, 00:19:00.495 "rdma_cm_event_timeout_ms": 0, 00:19:00.495 "dhchap_digests": [ 00:19:00.495 "sha256", 00:19:00.495 "sha384", 00:19:00.495 "sha512" 00:19:00.495 ], 00:19:00.495 "dhchap_dhgroups": [ 00:19:00.495 "null", 00:19:00.495 "ffdhe2048", 00:19:00.495 "ffdhe3072", 00:19:00.495 "ffdhe4096", 00:19:00.495 "ffdhe6144", 00:19:00.495 "ffdhe8192" 00:19:00.495 ] 00:19:00.495 } 00:19:00.495 }, 00:19:00.495 { 00:19:00.495 "method": "bdev_nvme_set_hotplug", 00:19:00.495 "params": { 00:19:00.495 "period_us": 100000, 00:19:00.495 "enable": false 00:19:00.495 } 00:19:00.495 }, 00:19:00.495 { 00:19:00.495 "method": "bdev_malloc_create", 00:19:00.495 "params": { 00:19:00.495 "name": "malloc0", 00:19:00.495 "num_blocks": 8192, 00:19:00.495 "block_size": 4096, 00:19:00.495 "physical_block_size": 4096, 00:19:00.495 "uuid": "f6ae281e-0ba2-40ed-8618-10e4b51ff37e", 00:19:00.495 "optimal_io_boundary": 0, 00:19:00.495 "md_size": 0, 00:19:00.495 "dif_type": 0, 00:19:00.495 "dif_is_head_of_md": false, 00:19:00.495 "dif_pi_format": 0 00:19:00.495 } 00:19:00.495 }, 00:19:00.495 { 00:19:00.495 "method": "bdev_wait_for_examine" 00:19:00.495 } 00:19:00.495 ] 00:19:00.495 }, 00:19:00.495 { 00:19:00.495 "subsystem": "nbd", 00:19:00.495 "config": [] 00:19:00.495 }, 00:19:00.495 { 00:19:00.495 "subsystem": "scheduler", 00:19:00.495 "config": [ 00:19:00.495 { 00:19:00.495 "method": "framework_set_scheduler", 00:19:00.495 "params": { 00:19:00.495 "name": "static" 00:19:00.495 } 00:19:00.495 } 00:19:00.495 ] 00:19:00.495 }, 00:19:00.495 { 00:19:00.495 "subsystem": "nvmf", 00:19:00.495 "config": [ 00:19:00.495 { 
00:19:00.495 "method": "nvmf_set_config", 00:19:00.495 "params": { 00:19:00.495 "discovery_filter": "match_any", 00:19:00.495 "admin_cmd_passthru": { 00:19:00.495 "identify_ctrlr": false 00:19:00.495 }, 00:19:00.495 "dhchap_digests": [ 00:19:00.495 "sha256", 00:19:00.495 "sha384", 00:19:00.495 "sha512" 00:19:00.495 ], 00:19:00.495 "dhchap_dhgroups": [ 00:19:00.495 "null", 00:19:00.496 "ffdhe2048", 00:19:00.496 "ffdhe3072", 00:19:00.496 "ffdhe4096", 00:19:00.496 "ffdhe6144", 00:19:00.496 "ffdhe8192" 00:19:00.496 ] 00:19:00.496 } 00:19:00.496 }, 00:19:00.496 { 00:19:00.496 "method": "nvmf_set_max_subsystems", 00:19:00.496 "params": { 00:19:00.496 "max_subsystems": 1024 00:19:00.496 } 00:19:00.496 }, 00:19:00.496 { 00:19:00.496 "method": "nvmf_set_crdt", 00:19:00.496 "params": { 00:19:00.496 "crdt1": 0, 00:19:00.496 "crdt2": 0, 00:19:00.496 "crdt3": 0 00:19:00.496 } 00:19:00.496 }, 00:19:00.496 { 00:19:00.496 "method": "nvmf_create_transport", 00:19:00.496 "params": { 00:19:00.496 "trtype": "TCP", 00:19:00.496 "max_queue_depth": 128, 00:19:00.496 "max_io_qpairs_per_ctrlr": 127, 00:19:00.496 "in_capsule_data_size": 4096, 00:19:00.496 "max_io_size": 131072, 00:19:00.496 "io_unit_size": 131072, 00:19:00.496 "max_aq_depth": 128, 00:19:00.496 "num_shared_buffers": 511, 00:19:00.496 "buf_cache_size": 4294967295, 00:19:00.496 "dif_insert_or_strip": false, 00:19:00.496 "zcopy": false, 00:19:00.496 "c2h_success": false, 00:19:00.496 "sock_priority": 0, 00:19:00.496 "abort_timeout_sec": 1, 00:19:00.496 "ack_timeout": 0, 00:19:00.496 "data_wr_pool_size": 0 00:19:00.496 } 00:19:00.496 }, 00:19:00.496 { 00:19:00.496 "method": "nvmf_create_subsystem", 00:19:00.496 "params": { 00:19:00.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.496 "allow_any_host": false, 00:19:00.496 "serial_number": "00000000000000000000", 00:19:00.496 "model_number": "SPDK bdev Controller", 00:19:00.496 "max_namespaces": 32, 00:19:00.496 "min_cntlid": 1, 00:19:00.496 "max_cntlid": 65519, 00:19:00.496 
"ana_reporting": false 00:19:00.496 } 00:19:00.496 }, 00:19:00.496 { 00:19:00.496 "method": "nvmf_subsystem_add_host", 00:19:00.496 "params": { 00:19:00.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.496 "host": "nqn.2016-06.io.spdk:host1", 00:19:00.496 "psk": "key0" 00:19:00.496 } 00:19:00.496 }, 00:19:00.496 { 00:19:00.496 "method": "nvmf_subsystem_add_ns", 00:19:00.496 "params": { 00:19:00.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.496 "namespace": { 00:19:00.496 "nsid": 1, 00:19:00.496 "bdev_name": "malloc0", 00:19:00.496 "nguid": "F6AE281E0BA240ED861810E4B51FF37E", 00:19:00.496 "uuid": "f6ae281e-0ba2-40ed-8618-10e4b51ff37e", 00:19:00.496 "no_auto_visible": false 00:19:00.496 } 00:19:00.496 } 00:19:00.496 }, 00:19:00.496 { 00:19:00.496 "method": "nvmf_subsystem_add_listener", 00:19:00.496 "params": { 00:19:00.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.496 "listen_address": { 00:19:00.496 "trtype": "TCP", 00:19:00.496 "adrfam": "IPv4", 00:19:00.496 "traddr": "10.0.0.2", 00:19:00.496 "trsvcid": "4420" 00:19:00.496 }, 00:19:00.496 "secure_channel": false, 00:19:00.496 "sock_impl": "ssl" 00:19:00.496 } 00:19:00.496 } 00:19:00.496 ] 00:19:00.496 } 00:19:00.496 ] 00:19:00.496 }' 00:19:00.496 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:00.754 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:00.754 "subsystems": [ 00:19:00.754 { 00:19:00.754 "subsystem": "keyring", 00:19:00.754 "config": [ 00:19:00.754 { 00:19:00.754 "method": "keyring_file_add_key", 00:19:00.754 "params": { 00:19:00.754 "name": "key0", 00:19:00.754 "path": "/tmp/tmp.Re8rfBDX5W" 00:19:00.754 } 00:19:00.754 } 00:19:00.754 ] 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "subsystem": "iobuf", 00:19:00.754 "config": [ 00:19:00.754 { 00:19:00.754 "method": "iobuf_set_options", 00:19:00.754 "params": { 00:19:00.754 
"small_pool_count": 8192, 00:19:00.754 "large_pool_count": 1024, 00:19:00.754 "small_bufsize": 8192, 00:19:00.754 "large_bufsize": 135168, 00:19:00.754 "enable_numa": false 00:19:00.754 } 00:19:00.754 } 00:19:00.754 ] 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "subsystem": "sock", 00:19:00.754 "config": [ 00:19:00.754 { 00:19:00.754 "method": "sock_set_default_impl", 00:19:00.754 "params": { 00:19:00.754 "impl_name": "posix" 00:19:00.754 } 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "method": "sock_impl_set_options", 00:19:00.754 "params": { 00:19:00.754 "impl_name": "ssl", 00:19:00.754 "recv_buf_size": 4096, 00:19:00.754 "send_buf_size": 4096, 00:19:00.754 "enable_recv_pipe": true, 00:19:00.754 "enable_quickack": false, 00:19:00.754 "enable_placement_id": 0, 00:19:00.754 "enable_zerocopy_send_server": true, 00:19:00.754 "enable_zerocopy_send_client": false, 00:19:00.754 "zerocopy_threshold": 0, 00:19:00.754 "tls_version": 0, 00:19:00.754 "enable_ktls": false 00:19:00.754 } 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "method": "sock_impl_set_options", 00:19:00.754 "params": { 00:19:00.754 "impl_name": "posix", 00:19:00.754 "recv_buf_size": 2097152, 00:19:00.754 "send_buf_size": 2097152, 00:19:00.754 "enable_recv_pipe": true, 00:19:00.754 "enable_quickack": false, 00:19:00.754 "enable_placement_id": 0, 00:19:00.754 "enable_zerocopy_send_server": true, 00:19:00.754 "enable_zerocopy_send_client": false, 00:19:00.754 "zerocopy_threshold": 0, 00:19:00.754 "tls_version": 0, 00:19:00.754 "enable_ktls": false 00:19:00.754 } 00:19:00.754 } 00:19:00.754 ] 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "subsystem": "vmd", 00:19:00.754 "config": [] 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "subsystem": "accel", 00:19:00.754 "config": [ 00:19:00.754 { 00:19:00.754 "method": "accel_set_options", 00:19:00.754 "params": { 00:19:00.754 "small_cache_size": 128, 00:19:00.754 "large_cache_size": 16, 00:19:00.754 "task_count": 2048, 00:19:00.754 "sequence_count": 2048, 00:19:00.754 
"buf_count": 2048 00:19:00.754 } 00:19:00.754 } 00:19:00.754 ] 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "subsystem": "bdev", 00:19:00.754 "config": [ 00:19:00.754 { 00:19:00.754 "method": "bdev_set_options", 00:19:00.754 "params": { 00:19:00.754 "bdev_io_pool_size": 65535, 00:19:00.754 "bdev_io_cache_size": 256, 00:19:00.754 "bdev_auto_examine": true, 00:19:00.754 "iobuf_small_cache_size": 128, 00:19:00.754 "iobuf_large_cache_size": 16 00:19:00.754 } 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "method": "bdev_raid_set_options", 00:19:00.754 "params": { 00:19:00.754 "process_window_size_kb": 1024, 00:19:00.754 "process_max_bandwidth_mb_sec": 0 00:19:00.754 } 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "method": "bdev_iscsi_set_options", 00:19:00.754 "params": { 00:19:00.754 "timeout_sec": 30 00:19:00.754 } 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "method": "bdev_nvme_set_options", 00:19:00.754 "params": { 00:19:00.754 "action_on_timeout": "none", 00:19:00.754 "timeout_us": 0, 00:19:00.754 "timeout_admin_us": 0, 00:19:00.754 "keep_alive_timeout_ms": 10000, 00:19:00.754 "arbitration_burst": 0, 00:19:00.754 "low_priority_weight": 0, 00:19:00.754 "medium_priority_weight": 0, 00:19:00.754 "high_priority_weight": 0, 00:19:00.754 "nvme_adminq_poll_period_us": 10000, 00:19:00.754 "nvme_ioq_poll_period_us": 0, 00:19:00.754 "io_queue_requests": 512, 00:19:00.754 "delay_cmd_submit": true, 00:19:00.754 "transport_retry_count": 4, 00:19:00.754 "bdev_retry_count": 3, 00:19:00.754 "transport_ack_timeout": 0, 00:19:00.754 "ctrlr_loss_timeout_sec": 0, 00:19:00.754 "reconnect_delay_sec": 0, 00:19:00.754 "fast_io_fail_timeout_sec": 0, 00:19:00.754 "disable_auto_failback": false, 00:19:00.754 "generate_uuids": false, 00:19:00.754 "transport_tos": 0, 00:19:00.754 "nvme_error_stat": false, 00:19:00.754 "rdma_srq_size": 0, 00:19:00.754 "io_path_stat": false, 00:19:00.754 "allow_accel_sequence": false, 00:19:00.754 "rdma_max_cq_size": 0, 00:19:00.754 "rdma_cm_event_timeout_ms": 0, 
00:19:00.754 "dhchap_digests": [ 00:19:00.754 "sha256", 00:19:00.754 "sha384", 00:19:00.754 "sha512" 00:19:00.754 ], 00:19:00.754 "dhchap_dhgroups": [ 00:19:00.754 "null", 00:19:00.754 "ffdhe2048", 00:19:00.754 "ffdhe3072", 00:19:00.754 "ffdhe4096", 00:19:00.754 "ffdhe6144", 00:19:00.754 "ffdhe8192" 00:19:00.754 ] 00:19:00.754 } 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "method": "bdev_nvme_attach_controller", 00:19:00.754 "params": { 00:19:00.754 "name": "nvme0", 00:19:00.754 "trtype": "TCP", 00:19:00.754 "adrfam": "IPv4", 00:19:00.754 "traddr": "10.0.0.2", 00:19:00.754 "trsvcid": "4420", 00:19:00.754 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.754 "prchk_reftag": false, 00:19:00.754 "prchk_guard": false, 00:19:00.754 "ctrlr_loss_timeout_sec": 0, 00:19:00.754 "reconnect_delay_sec": 0, 00:19:00.754 "fast_io_fail_timeout_sec": 0, 00:19:00.754 "psk": "key0", 00:19:00.754 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:00.754 "hdgst": false, 00:19:00.754 "ddgst": false, 00:19:00.754 "multipath": "multipath" 00:19:00.754 } 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "method": "bdev_nvme_set_hotplug", 00:19:00.754 "params": { 00:19:00.754 "period_us": 100000, 00:19:00.754 "enable": false 00:19:00.754 } 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "method": "bdev_enable_histogram", 00:19:00.754 "params": { 00:19:00.754 "name": "nvme0n1", 00:19:00.754 "enable": true 00:19:00.754 } 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "method": "bdev_wait_for_examine" 00:19:00.754 } 00:19:00.754 ] 00:19:00.754 }, 00:19:00.754 { 00:19:00.754 "subsystem": "nbd", 00:19:00.754 "config": [] 00:19:00.754 } 00:19:00.754 ] 00:19:00.754 }' 00:19:00.754 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2548022 00:19:00.754 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2548022 ']' 00:19:00.754 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2548022 00:19:00.754 12:42:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:00.754 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.754 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2548022 00:19:01.013 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:01.013 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:01.013 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2548022' 00:19:01.013 killing process with pid 2548022 00:19:01.013 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2548022 00:19:01.013 Received shutdown signal, test time was about 1.000000 seconds 00:19:01.013 00:19:01.013 Latency(us) 00:19:01.013 [2024-11-28T11:42:43.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.013 [2024-11-28T11:42:43.532Z] =================================================================================================================== 00:19:01.013 [2024-11-28T11:42:43.532Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:01.013 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2548022 00:19:01.013 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2548000 00:19:01.013 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2548000 ']' 00:19:01.013 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2548000 00:19:01.013 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:01.013 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.013 
12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2548000 00:19:01.013 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:01.013 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:01.013 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2548000' 00:19:01.013 killing process with pid 2548000 00:19:01.013 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2548000 00:19:01.013 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2548000 00:19:01.274 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:01.274 "subsystems": [ 00:19:01.274 { 00:19:01.274 "subsystem": "keyring", 00:19:01.274 "config": [ 00:19:01.274 { 00:19:01.274 "method": "keyring_file_add_key", 00:19:01.274 "params": { 00:19:01.274 "name": "key0", 00:19:01.274 "path": "/tmp/tmp.Re8rfBDX5W" 00:19:01.274 } 00:19:01.274 } 00:19:01.274 ] 00:19:01.274 }, 00:19:01.274 { 00:19:01.274 "subsystem": "iobuf", 00:19:01.274 "config": [ 00:19:01.274 { 00:19:01.274 "method": "iobuf_set_options", 00:19:01.274 "params": { 00:19:01.274 "small_pool_count": 8192, 00:19:01.274 "large_pool_count": 1024, 00:19:01.274 "small_bufsize": 8192, 00:19:01.274 "large_bufsize": 135168, 00:19:01.274 "enable_numa": false 00:19:01.274 } 00:19:01.274 } 00:19:01.274 ] 00:19:01.274 }, 00:19:01.274 { 00:19:01.274 "subsystem": "sock", 00:19:01.274 "config": [ 00:19:01.274 { 00:19:01.274 "method": "sock_set_default_impl", 00:19:01.274 "params": { 00:19:01.274 "impl_name": "posix" 00:19:01.274 } 00:19:01.274 }, 00:19:01.274 { 00:19:01.274 "method": "sock_impl_set_options", 00:19:01.274 "params": { 00:19:01.274 "impl_name": "ssl", 00:19:01.274 "recv_buf_size": 4096, 00:19:01.274 "send_buf_size": 4096, 
00:19:01.274 "enable_recv_pipe": true, 00:19:01.274 "enable_quickack": false, 00:19:01.274 "enable_placement_id": 0, 00:19:01.274 "enable_zerocopy_send_server": true, 00:19:01.274 "enable_zerocopy_send_client": false, 00:19:01.274 "zerocopy_threshold": 0, 00:19:01.274 "tls_version": 0, 00:19:01.274 "enable_ktls": false 00:19:01.274 } 00:19:01.274 }, 00:19:01.274 { 00:19:01.274 "method": "sock_impl_set_options", 00:19:01.274 "params": { 00:19:01.274 "impl_name": "posix", 00:19:01.274 "recv_buf_size": 2097152, 00:19:01.274 "send_buf_size": 2097152, 00:19:01.274 "enable_recv_pipe": true, 00:19:01.274 "enable_quickack": false, 00:19:01.274 "enable_placement_id": 0, 00:19:01.274 "enable_zerocopy_send_server": true, 00:19:01.274 "enable_zerocopy_send_client": false, 00:19:01.274 "zerocopy_threshold": 0, 00:19:01.274 "tls_version": 0, 00:19:01.274 "enable_ktls": false 00:19:01.274 } 00:19:01.274 } 00:19:01.274 ] 00:19:01.274 }, 00:19:01.274 { 00:19:01.274 "subsystem": "vmd", 00:19:01.274 "config": [] 00:19:01.274 }, 00:19:01.274 { 00:19:01.274 "subsystem": "accel", 00:19:01.274 "config": [ 00:19:01.274 { 00:19:01.274 "method": "accel_set_options", 00:19:01.274 "params": { 00:19:01.274 "small_cache_size": 128, 00:19:01.274 "large_cache_size": 16, 00:19:01.274 "task_count": 2048, 00:19:01.274 "sequence_count": 2048, 00:19:01.274 "buf_count": 2048 00:19:01.274 } 00:19:01.274 } 00:19:01.274 ] 00:19:01.274 }, 00:19:01.274 { 00:19:01.274 "subsystem": "bdev", 00:19:01.274 "config": [ 00:19:01.274 { 00:19:01.274 "method": "bdev_set_options", 00:19:01.274 "params": { 00:19:01.274 "bdev_io_pool_size": 65535, 00:19:01.274 "bdev_io_cache_size": 256, 00:19:01.274 "bdev_auto_examine": true, 00:19:01.274 "iobuf_small_cache_size": 128, 00:19:01.274 "iobuf_large_cache_size": 16 00:19:01.274 } 00:19:01.274 }, 00:19:01.274 { 00:19:01.274 "method": "bdev_raid_set_options", 00:19:01.274 "params": { 00:19:01.274 "process_window_size_kb": 1024, 00:19:01.274 "process_max_bandwidth_mb_sec": 0 
00:19:01.274 } 00:19:01.274 }, 00:19:01.274 { 00:19:01.274 "method": "bdev_iscsi_set_options", 00:19:01.274 "params": { 00:19:01.274 "timeout_sec": 30 00:19:01.274 } 00:19:01.274 }, 00:19:01.274 { 00:19:01.274 "method": "bdev_nvme_set_options", 00:19:01.274 "params": { 00:19:01.274 "action_on_timeout": "none", 00:19:01.274 "timeout_us": 0, 00:19:01.274 "timeout_admin_us": 0, 00:19:01.274 "keep_alive_timeout_ms": 10000, 00:19:01.274 "arbitration_burst": 0, 00:19:01.274 "low_priority_weight": 0, 00:19:01.274 "medium_priority_weight": 0, 00:19:01.274 "high_priority_weight": 0, 00:19:01.274 "nvme_adminq_poll_period_us": 10000, 00:19:01.274 "nvme_ioq_poll_period_us": 0, 00:19:01.274 "io_queue_requests": 0, 00:19:01.274 "delay_cmd_submit": true, 00:19:01.274 "transport_retry_count": 4, 00:19:01.274 "bdev_retry_count": 3, 00:19:01.274 "transport_ack_timeout": 0, 00:19:01.274 "ctrlr_loss_timeout_sec": 0, 00:19:01.274 "reconnect_delay_sec": 0, 00:19:01.274 "fast_io_fail_timeout_sec": 0, 00:19:01.274 "disable_auto_failback": false, 00:19:01.274 "generate_uuids": false, 00:19:01.274 "transport_tos": 0, 00:19:01.274 "nvme_error_stat": false, 00:19:01.274 "rdma_srq_size": 0, 00:19:01.274 "io_path_stat": false, 00:19:01.274 "allow_accel_sequence": false, 00:19:01.274 "rdma_max_cq_size": 0, 00:19:01.274 "rdma_cm_event_timeout_ms": 0, 00:19:01.274 "dhchap_digests": [ 00:19:01.274 "sha256", 00:19:01.274 "sha384", 00:19:01.274 "sha512" 00:19:01.274 ], 00:19:01.274 "dhchap_dhgroups": [ 00:19:01.274 "null", 00:19:01.274 "ffdhe2048", 00:19:01.274 "ffdhe3072", 00:19:01.274 "ffdhe4096", 00:19:01.274 "ffdhe6144", 00:19:01.274 "ffdhe8192" 00:19:01.274 ] 00:19:01.274 } 00:19:01.274 }, 00:19:01.274 { 00:19:01.274 "method": "bdev_nvme_set_hotplug", 00:19:01.274 "params": { 00:19:01.274 "period_us": 100000, 00:19:01.274 "enable": false 00:19:01.274 } 00:19:01.274 }, 00:19:01.274 { 00:19:01.274 "method": "bdev_malloc_create", 00:19:01.274 "params": { 00:19:01.274 "name": "malloc0", 00:19:01.274 
"num_blocks": 8192, 00:19:01.274 "block_size": 4096, 00:19:01.274 "physical_block_size": 4096, 00:19:01.274 "uuid": "f6ae281e-0ba2-40ed-8618-10e4b51ff37e", 00:19:01.274 "optimal_io_boundary": 0, 00:19:01.274 "md_size": 0, 00:19:01.274 "dif_type": 0, 00:19:01.274 "dif_is_head_of_md": false, 00:19:01.274 "dif_pi_format": 0 00:19:01.274 } 00:19:01.274 }, 00:19:01.274 { 00:19:01.274 "method": "bdev_wait_for_examine" 00:19:01.274 } 00:19:01.274 ] 00:19:01.274 }, 00:19:01.274 { 00:19:01.274 "subsystem": "nbd", 00:19:01.274 "config": [] 00:19:01.274 }, 00:19:01.274 { 00:19:01.274 "subsystem": "scheduler", 00:19:01.274 "config": [ 00:19:01.274 { 00:19:01.274 "method": "framework_set_scheduler", 00:19:01.274 "params": { 00:19:01.274 "name": "static" 00:19:01.274 } 00:19:01.274 } 00:19:01.274 ] 00:19:01.274 }, 00:19:01.274 { 00:19:01.274 "subsystem": "nvmf", 00:19:01.274 "config": [ 00:19:01.274 { 00:19:01.274 "method": "nvmf_set_config", 00:19:01.274 "params": { 00:19:01.274 "discovery_filter": "match_any", 00:19:01.274 "admin_cmd_passthru": { 00:19:01.274 "identify_ctrlr": false 00:19:01.274 }, 00:19:01.274 "dhchap_digests": [ 00:19:01.274 "sha256", 00:19:01.274 "sha384", 00:19:01.274 "sha512" 00:19:01.274 ], 00:19:01.274 "dhchap_dhgroups": [ 00:19:01.274 "null", 00:19:01.274 "ffdhe2048", 00:19:01.274 "ffdhe3072", 00:19:01.274 "ffdhe4096", 00:19:01.274 "ffdhe6144", 00:19:01.274 "ffdhe8192" 00:19:01.274 ] 00:19:01.274 } 00:19:01.274 }, 00:19:01.274 { 00:19:01.274 "method": "nvmf_set_max_subsystems", 00:19:01.274 "params": { 00:19:01.274 "max_subsystems": 1024 00:19:01.274 } 00:19:01.274 }, 00:19:01.274 { 00:19:01.274 "method": "nvmf_set_crdt", 00:19:01.274 "params": { 00:19:01.274 "crdt1": 0, 00:19:01.274 "crdt2": 0, 00:19:01.274 "crdt3": 0 00:19:01.274 } 00:19:01.274 }, 00:19:01.274 { 00:19:01.274 "method": "nvmf_create_transport", 00:19:01.274 "params": { 00:19:01.274 "trtype": "TCP", 00:19:01.274 "max_queue_depth": 128, 00:19:01.274 "max_io_qpairs_per_ctrlr": 127, 
00:19:01.274 "in_capsule_data_size": 4096, 00:19:01.274 "max_io_size": 131072, 00:19:01.275 "io_unit_size": 131072, 00:19:01.275 "max_aq_depth": 128, 00:19:01.275 "num_shared_buffers": 511, 00:19:01.275 "buf_cache_size": 4294967295, 00:19:01.275 "dif_insert_or_strip": false, 00:19:01.275 "zcopy": false, 00:19:01.275 "c2h_success": false, 00:19:01.275 "sock_priority": 0, 00:19:01.275 "abort_timeout_sec": 1, 00:19:01.275 "ack_timeout": 0, 00:19:01.275 "data_wr_pool_size": 0 00:19:01.275 } 00:19:01.275 }, 00:19:01.275 { 00:19:01.275 "method": "nvmf_create_subsystem", 00:19:01.275 "params": { 00:19:01.275 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.275 "allow_any_host": false, 00:19:01.275 "serial_number": "00000000000000000000", 00:19:01.275 "model_number": "SPDK bdev Controller", 00:19:01.275 "max_namespaces": 32, 00:19:01.275 "min_cntlid": 1, 00:19:01.275 "max_cntlid": 65519, 00:19:01.275 "ana_reporting": false 00:19:01.275 } 00:19:01.275 }, 00:19:01.275 { 00:19:01.275 "method": "nvmf_subsystem_add_host", 00:19:01.275 "params": { 00:19:01.275 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.275 "host": "nqn.2016-06.io.spdk:host1", 00:19:01.275 "psk": "key0" 00:19:01.275 } 00:19:01.275 }, 00:19:01.275 { 00:19:01.275 "method": "nvmf_subsystem_add_ns", 00:19:01.275 "params": { 00:19:01.275 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.275 "namespace": { 00:19:01.275 "nsid": 1, 00:19:01.275 "bdev_name": "malloc0", 00:19:01.275 "nguid": "F6AE281E0BA240ED861810E4B51FF37E", 00:19:01.275 "uuid": "f6ae281e-0ba2-40ed-8618-10e4b51ff37e", 00:19:01.275 "no_auto_visible": false 00:19:01.275 } 00:19:01.275 } 00:19:01.275 }, 00:19:01.275 { 00:19:01.275 "method": "nvmf_subsystem_add_listener", 00:19:01.275 "params": { 00:19:01.275 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.275 "listen_address": { 00:19:01.275 "trtype": "TCP", 00:19:01.275 "adrfam": "IPv4", 00:19:01.275 "traddr": "10.0.0.2", 00:19:01.275 "trsvcid": "4420" 00:19:01.275 }, 00:19:01.275 "secure_channel": false, 
00:19:01.275 "sock_impl": "ssl" 00:19:01.275 } 00:19:01.275 } 00:19:01.275 ] 00:19:01.275 } 00:19:01.275 ] 00:19:01.275 }' 00:19:01.275 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:01.275 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:01.275 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:01.275 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.275 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2548500 00:19:01.275 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:01.275 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2548500 00:19:01.275 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2548500 ']' 00:19:01.275 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.275 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.275 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.275 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.275 12:42:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.275 [2024-11-28 12:42:43.747941] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:19:01.275 [2024-11-28 12:42:43.747999] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.534 [2024-11-28 12:42:43.814271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.534 [2024-11-28 12:42:43.854889] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.534 [2024-11-28 12:42:43.854928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.534 [2024-11-28 12:42:43.854935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.534 [2024-11-28 12:42:43.854941] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.534 [2024-11-28 12:42:43.854951] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:01.534 [2024-11-28 12:42:43.855565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.793 [2024-11-28 12:42:44.068747] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.793 [2024-11-28 12:42:44.100785] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:01.793 [2024-11-28 12:42:44.100997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.360 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.360 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:02.360 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:02.360 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:02.360 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.360 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.360 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2548741 00:19:02.360 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2548741 /var/tmp/bdevperf.sock 00:19:02.360 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2548741 ']' 00:19:02.360 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:02.360 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:02.360 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:02.360 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:02.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:02.360 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:02.360 "subsystems": [ 00:19:02.360 { 00:19:02.360 "subsystem": "keyring", 00:19:02.360 "config": [ 00:19:02.360 { 00:19:02.360 "method": "keyring_file_add_key", 00:19:02.361 "params": { 00:19:02.361 "name": "key0", 00:19:02.361 "path": "/tmp/tmp.Re8rfBDX5W" 00:19:02.361 } 00:19:02.361 } 00:19:02.361 ] 00:19:02.361 }, 00:19:02.361 { 00:19:02.361 "subsystem": "iobuf", 00:19:02.361 "config": [ 00:19:02.361 { 00:19:02.361 "method": "iobuf_set_options", 00:19:02.361 "params": { 00:19:02.361 "small_pool_count": 8192, 00:19:02.361 "large_pool_count": 1024, 00:19:02.361 "small_bufsize": 8192, 00:19:02.361 "large_bufsize": 135168, 00:19:02.361 "enable_numa": false 00:19:02.361 } 00:19:02.361 } 00:19:02.361 ] 00:19:02.361 }, 00:19:02.361 { 00:19:02.361 "subsystem": "sock", 00:19:02.361 "config": [ 00:19:02.361 { 00:19:02.361 "method": "sock_set_default_impl", 00:19:02.361 "params": { 00:19:02.361 "impl_name": "posix" 00:19:02.361 } 00:19:02.361 }, 00:19:02.361 { 00:19:02.361 "method": "sock_impl_set_options", 00:19:02.361 "params": { 00:19:02.361 "impl_name": "ssl", 00:19:02.361 "recv_buf_size": 4096, 00:19:02.361 "send_buf_size": 4096, 00:19:02.361 "enable_recv_pipe": true, 00:19:02.361 "enable_quickack": false, 00:19:02.361 "enable_placement_id": 0, 00:19:02.361 "enable_zerocopy_send_server": true, 00:19:02.361 "enable_zerocopy_send_client": false, 00:19:02.361 "zerocopy_threshold": 0, 00:19:02.361 "tls_version": 0, 00:19:02.361 "enable_ktls": false 00:19:02.361 } 00:19:02.361 }, 00:19:02.361 { 00:19:02.361 "method": "sock_impl_set_options", 00:19:02.361 "params": { 
00:19:02.361 "impl_name": "posix", 00:19:02.361 "recv_buf_size": 2097152, 00:19:02.361 "send_buf_size": 2097152, 00:19:02.361 "enable_recv_pipe": true, 00:19:02.361 "enable_quickack": false, 00:19:02.361 "enable_placement_id": 0, 00:19:02.361 "enable_zerocopy_send_server": true, 00:19:02.361 "enable_zerocopy_send_client": false, 00:19:02.361 "zerocopy_threshold": 0, 00:19:02.361 "tls_version": 0, 00:19:02.361 "enable_ktls": false 00:19:02.361 } 00:19:02.361 } 00:19:02.361 ] 00:19:02.361 }, 00:19:02.361 { 00:19:02.361 "subsystem": "vmd", 00:19:02.361 "config": [] 00:19:02.361 }, 00:19:02.361 { 00:19:02.361 "subsystem": "accel", 00:19:02.361 "config": [ 00:19:02.361 { 00:19:02.361 "method": "accel_set_options", 00:19:02.361 "params": { 00:19:02.361 "small_cache_size": 128, 00:19:02.361 "large_cache_size": 16, 00:19:02.361 "task_count": 2048, 00:19:02.361 "sequence_count": 2048, 00:19:02.361 "buf_count": 2048 00:19:02.361 } 00:19:02.361 } 00:19:02.361 ] 00:19:02.361 }, 00:19:02.361 { 00:19:02.361 "subsystem": "bdev", 00:19:02.361 "config": [ 00:19:02.361 { 00:19:02.361 "method": "bdev_set_options", 00:19:02.361 "params": { 00:19:02.361 "bdev_io_pool_size": 65535, 00:19:02.361 "bdev_io_cache_size": 256, 00:19:02.361 "bdev_auto_examine": true, 00:19:02.361 "iobuf_small_cache_size": 128, 00:19:02.361 "iobuf_large_cache_size": 16 00:19:02.361 } 00:19:02.361 }, 00:19:02.361 { 00:19:02.361 "method": "bdev_raid_set_options", 00:19:02.361 "params": { 00:19:02.361 "process_window_size_kb": 1024, 00:19:02.361 "process_max_bandwidth_mb_sec": 0 00:19:02.361 } 00:19:02.361 }, 00:19:02.361 { 00:19:02.361 "method": "bdev_iscsi_set_options", 00:19:02.361 "params": { 00:19:02.361 "timeout_sec": 30 00:19:02.361 } 00:19:02.361 }, 00:19:02.361 { 00:19:02.361 "method": "bdev_nvme_set_options", 00:19:02.361 "params": { 00:19:02.361 "action_on_timeout": "none", 00:19:02.361 "timeout_us": 0, 00:19:02.361 "timeout_admin_us": 0, 00:19:02.361 "keep_alive_timeout_ms": 10000, 00:19:02.361 
"arbitration_burst": 0, 00:19:02.361 "low_priority_weight": 0, 00:19:02.361 "medium_priority_weight": 0, 00:19:02.361 "high_priority_weight": 0, 00:19:02.361 "nvme_adminq_poll_period_us": 10000, 00:19:02.361 "nvme_ioq_poll_period_us": 0, 00:19:02.361 "io_queue_requests": 512, 00:19:02.361 "delay_cmd_submit": true, 00:19:02.361 "transport_retry_count": 4, 00:19:02.361 "bdev_retry_count": 3, 00:19:02.361 "transport_ack_timeout": 0, 00:19:02.361 "ctrlr_loss_timeout_sec": 0, 00:19:02.361 "reconnect_delay_sec": 0, 00:19:02.361 "fast_io_fail_timeout_sec": 0, 00:19:02.361 "disable_auto_failback": false, 00:19:02.361 "generate_uuids": false, 00:19:02.361 "transport_tos": 0, 00:19:02.361 "nvme_error_stat": false, 00:19:02.361 "rdma_srq_size": 0, 00:19:02.361 "io_path_stat": false, 00:19:02.361 "allow_accel_sequence": false, 00:19:02.361 "rdma_max_cq_size": 0, 00:19:02.361 "rdma_cm_event_timeout_ms": 0, 00:19:02.361 "dhchap_digests": [ 00:19:02.361 "sha256", 00:19:02.361 "sha384", 00:19:02.361 "sha512" 00:19:02.361 ], 00:19:02.361 "dhchap_dhgroups": [ 00:19:02.361 "null", 00:19:02.361 "ffdhe2048", 00:19:02.361 "ffdhe3072", 00:19:02.361 "ffdhe4096", 00:19:02.361 "ffdhe6144", 00:19:02.361 "ffdhe8192" 00:19:02.361 ] 00:19:02.361 } 00:19:02.361 }, 00:19:02.361 { 00:19:02.361 "method": "bdev_nvme_attach_controller", 00:19:02.361 "params": { 00:19:02.361 "name": "nvme0", 00:19:02.361 "trtype": "TCP", 00:19:02.361 "adrfam": "IPv4", 00:19:02.361 "traddr": "10.0.0.2", 00:19:02.361 "trsvcid": "4420", 00:19:02.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.361 "prchk_reftag": false, 00:19:02.361 "prchk_guard": false, 00:19:02.361 "ctrlr_loss_timeout_sec": 0, 00:19:02.361 "reconnect_delay_sec": 0, 00:19:02.361 "fast_io_fail_timeout_sec": 0, 00:19:02.361 "psk": "key0", 00:19:02.361 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:02.361 "hdgst": false, 00:19:02.361 "ddgst": false, 00:19:02.361 "multipath": "multipath" 00:19:02.361 } 00:19:02.361 }, 00:19:02.361 { 00:19:02.361 
"method": "bdev_nvme_set_hotplug", 00:19:02.361 "params": { 00:19:02.361 "period_us": 100000, 00:19:02.361 "enable": false 00:19:02.361 } 00:19:02.361 }, 00:19:02.361 { 00:19:02.361 "method": "bdev_enable_histogram", 00:19:02.361 "params": { 00:19:02.361 "name": "nvme0n1", 00:19:02.361 "enable": true 00:19:02.361 } 00:19:02.361 }, 00:19:02.361 { 00:19:02.361 "method": "bdev_wait_for_examine" 00:19:02.361 } 00:19:02.361 ] 00:19:02.361 }, 00:19:02.361 { 00:19:02.361 "subsystem": "nbd", 00:19:02.361 "config": [] 00:19:02.361 } 00:19:02.361 ] 00:19:02.361 }' 00:19:02.361 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.361 12:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.361 [2024-11-28 12:42:44.666570] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:19:02.361 [2024-11-28 12:42:44.666615] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2548741 ] 00:19:02.361 [2024-11-28 12:42:44.729451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.361 [2024-11-28 12:42:44.770589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.620 [2024-11-28 12:42:44.925772] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:03.184 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:03.184 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:03.184 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:03.184 12:42:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:03.184 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.184 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:03.478 Running I/O for 1 seconds... 00:19:04.410 5070.00 IOPS, 19.80 MiB/s 00:19:04.410 Latency(us) 00:19:04.410 [2024-11-28T11:42:46.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.410 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:04.410 Verification LBA range: start 0x0 length 0x2000 00:19:04.410 nvme0n1 : 1.02 5106.25 19.95 0.00 0.00 24870.73 4900.95 36016.31 00:19:04.410 [2024-11-28T11:42:46.929Z] =================================================================================================================== 00:19:04.410 [2024-11-28T11:42:46.929Z] Total : 5106.25 19.95 0.00 0.00 24870.73 4900.95 36016.31 00:19:04.410 { 00:19:04.410 "results": [ 00:19:04.410 { 00:19:04.410 "job": "nvme0n1", 00:19:04.410 "core_mask": "0x2", 00:19:04.410 "workload": "verify", 00:19:04.410 "status": "finished", 00:19:04.410 "verify_range": { 00:19:04.410 "start": 0, 00:19:04.410 "length": 8192 00:19:04.410 }, 00:19:04.410 "queue_depth": 128, 00:19:04.410 "io_size": 4096, 00:19:04.410 "runtime": 1.017969, 00:19:04.410 "iops": 5106.2458679979445, 00:19:04.410 "mibps": 19.94627292186697, 00:19:04.410 "io_failed": 0, 00:19:04.410 "io_timeout": 0, 00:19:04.410 "avg_latency_us": 24870.726718303027, 00:19:04.410 "min_latency_us": 4900.953043478261, 00:19:04.410 "max_latency_us": 36016.30608695652 00:19:04.410 } 00:19:04.410 ], 00:19:04.410 "core_count": 1 00:19:04.410 } 00:19:04.410 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:04.410 12:42:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:04.410 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:04.411 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:04.411 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:04.411 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:04.411 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:04.411 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:04.411 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:04.411 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:04.411 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:04.411 nvmf_trace.0 00:19:04.411 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:04.411 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2548741 00:19:04.411 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2548741 ']' 00:19:04.411 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2548741 00:19:04.411 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:04.411 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.411 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 2548741 00:19:04.668 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:04.668 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:04.668 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2548741' 00:19:04.668 killing process with pid 2548741 00:19:04.668 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2548741 00:19:04.668 Received shutdown signal, test time was about 1.000000 seconds 00:19:04.668 00:19:04.668 Latency(us) 00:19:04.668 [2024-11-28T11:42:47.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.668 [2024-11-28T11:42:47.187Z] =================================================================================================================== 00:19:04.668 [2024-11-28T11:42:47.187Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:04.668 12:42:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2548741 00:19:04.668 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:04.668 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:04.668 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:04.668 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:04.668 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:04.668 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:04.668 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:04.668 rmmod nvme_tcp 00:19:04.668 rmmod nvme_fabrics 00:19:04.668 rmmod nvme_keyring 00:19:04.668 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:19:04.668 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:04.668 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:04.668 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2548500 ']' 00:19:04.668 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2548500 00:19:04.668 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2548500 ']' 00:19:04.668 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2548500 00:19:04.926 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:04.926 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.926 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2548500 00:19:04.926 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:04.926 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:04.926 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2548500' 00:19:04.926 killing process with pid 2548500 00:19:04.926 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2548500 00:19:04.926 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2548500 00:19:04.926 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:04.926 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:04.926 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:04.926 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:19:04.926 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:04.926 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:04.926 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:04.926 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:04.926 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:04.926 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.926 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:04.926 12:42:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.1ej0AGm7jB /tmp/tmp.6BY9e0EPs9 /tmp/tmp.Re8rfBDX5W 00:19:07.460 00:19:07.460 real 1m18.360s 00:19:07.460 user 2m1.155s 00:19:07.460 sys 0m29.397s 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.460 ************************************ 00:19:07.460 END TEST nvmf_tls 00:19:07.460 ************************************ 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:07.460 ************************************ 00:19:07.460 START TEST nvmf_fips 00:19:07.460 ************************************ 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:07.460 * Looking for test storage... 00:19:07.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:07.460 
12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:07.460 12:42:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:07.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.460 --rc genhtml_branch_coverage=1 00:19:07.460 --rc genhtml_function_coverage=1 00:19:07.460 --rc genhtml_legend=1 00:19:07.460 --rc geninfo_all_blocks=1 00:19:07.460 --rc geninfo_unexecuted_blocks=1 00:19:07.460 00:19:07.460 ' 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:07.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.460 --rc genhtml_branch_coverage=1 00:19:07.460 --rc genhtml_function_coverage=1 00:19:07.460 --rc genhtml_legend=1 00:19:07.460 --rc geninfo_all_blocks=1 00:19:07.460 --rc geninfo_unexecuted_blocks=1 00:19:07.460 00:19:07.460 ' 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:07.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.460 --rc genhtml_branch_coverage=1 00:19:07.460 --rc genhtml_function_coverage=1 00:19:07.460 --rc genhtml_legend=1 00:19:07.460 --rc geninfo_all_blocks=1 00:19:07.460 --rc geninfo_unexecuted_blocks=1 00:19:07.460 00:19:07.460 ' 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:07.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.460 --rc genhtml_branch_coverage=1 00:19:07.460 --rc genhtml_function_coverage=1 00:19:07.460 --rc genhtml_legend=1 00:19:07.460 --rc geninfo_all_blocks=1 00:19:07.460 --rc geninfo_unexecuted_blocks=1 00:19:07.460 00:19:07.460 ' 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.460 12:42:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.460 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.461 12:42:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:07.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:07.461 Error setting digest 00:19:07.461 40922EC3997F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:07.461 40922EC3997F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:07.461 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:07.462 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:07.462 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:07.462 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:07.462 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:07.462 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.462 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:07.462 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:07.462 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:07.462 12:42:49 
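The `Error setting digest` lines above are the expected outcome of the trace's FIPS sanity check (`fips.sh@128`): under a FIPS-enforcing OpenSSL, MD5 cannot be fetched, so a *failing* `openssl md5` is taken as evidence the FIPS provider is active. A minimal probe of the same idea (result depends on the host's OpenSSL build):

```shell
#!/usr/bin/env bash
# Probe whether MD5 is fetchable; on a FIPS-enforcing OpenSSL build this
# command fails, which is what the test suite treats as the "good" case.
if openssl md5 /dev/null >/dev/null 2>&1; then
    echo "md5 available (FIPS provider not enforcing)"
else
    echo "md5 blocked (consistent with FIPS mode)"
fi
```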
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.462 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:07.462 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.462 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:07.462 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:07.462 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:07.462 12:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
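The device discovery that follows in the trace resolves kernel net devices for each matched PCI function by globbing sysfs (`/sys/bus/pci/devices/$pci/net/*`) and stripping the directory prefix. The sketch below uses a mock sysfs tree so it is self-contained; on a real host the path would be the actual sysfs mount, and `cvl_0_0` is the interface name seen later in this log:

```shell
#!/usr/bin/env bash
# Mock sysfs tree standing in for /sys/bus/pci/devices on a real host.
sysfs=$(mktemp -d)
pci=0000:86:00.0
mkdir -p "$sysfs/$pci/net/cvl_0_0"

pci_net_devs=("$sysfs/$pci/net/"*)          # glob the interface directories
pci_net_devs=("${pci_net_devs[@]##*/}")     # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"

rm -rf "$sysfs"
```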
00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:12.726 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:12.726 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:12.726 Found net devices under 0000:86:00.0: cvl_0_0 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:12.726 Found net devices under 0000:86:00.1: cvl_0_1 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.726 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:12.727 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:12.727 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:12.727 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:12.727 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:12.727 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:12.727 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:12.727 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:12.727 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:12.727 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:12.727 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:12.727 12:42:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:12.727 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:12.727 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:12.727 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:12.727 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:12.727 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:12.727 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:12.727 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:12.727 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:12.727 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:12.727 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:12.727 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:12.727 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:12.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:12.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:19:12.985 00:19:12.985 --- 10.0.0.2 ping statistics --- 00:19:12.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.985 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:12.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:12.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:19:12.985 00:19:12.985 --- 10.0.0.1 ping statistics --- 00:19:12.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.985 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:12.985 12:42:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2552545 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2552545 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2552545 ']' 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.985 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:12.985 [2024-11-28 12:42:55.387305] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:19:12.985 [2024-11-28 12:42:55.387354] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.985 [2024-11-28 12:42:55.447799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.985 [2024-11-28 12:42:55.489964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.985 [2024-11-28 12:42:55.489998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.985 [2024-11-28 12:42:55.490005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.985 [2024-11-28 12:42:55.490012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:12.985 [2024-11-28 12:42:55.490019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:12.985 [2024-11-28 12:42:55.490582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.243 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.243 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:13.243 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:13.244 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:13.244 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:13.244 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.244 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:13.244 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:13.244 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:13.244 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.4L3 00:19:13.244 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:13.244 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.4L3 00:19:13.244 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.4L3 00:19:13.244 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.4L3 00:19:13.244 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:13.502 [2024-11-28 12:42:55.804038] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.502 [2024-11-28 12:42:55.820049] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:13.502 [2024-11-28 12:42:55.820264] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:13.502 malloc0 00:19:13.502 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:13.502 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2552789 00:19:13.503 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:13.503 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2552789 /var/tmp/bdevperf.sock 00:19:13.503 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2552789 ']' 00:19:13.503 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.503 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:13.503 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:13.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:13.503 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:13.503 12:42:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:13.503 [2024-11-28 12:42:55.936649] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:19:13.503 [2024-11-28 12:42:55.936697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2552789 ] 00:19:13.503 [2024-11-28 12:42:55.994847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.762 [2024-11-28 12:42:56.036383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.762 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.762 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:13.762 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.4L3 00:19:14.021 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:14.021 [2024-11-28 12:42:56.477020] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:14.279 TLSTESTn1 00:19:14.279 12:42:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:14.279 Running I/O for 10 seconds... 
00:19:16.151 5382.00 IOPS, 21.02 MiB/s [2024-11-28T11:43:00.048Z] 5378.00 IOPS, 21.01 MiB/s [2024-11-28T11:43:00.980Z] 5370.00 IOPS, 20.98 MiB/s [2024-11-28T11:43:01.916Z] 5412.25 IOPS, 21.14 MiB/s [2024-11-28T11:43:02.852Z] 5436.40 IOPS, 21.24 MiB/s [2024-11-28T11:43:03.788Z] 5435.00 IOPS, 21.23 MiB/s [2024-11-28T11:43:04.724Z] 5237.43 IOPS, 20.46 MiB/s [2024-11-28T11:43:05.677Z] 5108.00 IOPS, 19.95 MiB/s [2024-11-28T11:43:07.055Z] 4969.33 IOPS, 19.41 MiB/s [2024-11-28T11:43:07.055Z] 4852.80 IOPS, 18.96 MiB/s 00:19:24.536 Latency(us) 00:19:24.536 [2024-11-28T11:43:07.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.536 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:24.536 Verification LBA range: start 0x0 length 0x2000 00:19:24.536 TLSTESTn1 : 10.03 4850.80 18.95 0.00 0.00 26333.28 5271.37 51061.09 00:19:24.536 [2024-11-28T11:43:07.055Z] =================================================================================================================== 00:19:24.536 [2024-11-28T11:43:07.055Z] Total : 4850.80 18.95 0.00 0.00 26333.28 5271.37 51061.09 00:19:24.536 { 00:19:24.536 "results": [ 00:19:24.536 { 00:19:24.536 "job": "TLSTESTn1", 00:19:24.536 "core_mask": "0x4", 00:19:24.536 "workload": "verify", 00:19:24.536 "status": "finished", 00:19:24.536 "verify_range": { 00:19:24.536 "start": 0, 00:19:24.536 "length": 8192 00:19:24.536 }, 00:19:24.536 "queue_depth": 128, 00:19:24.536 "io_size": 4096, 00:19:24.536 "runtime": 10.030311, 00:19:24.536 "iops": 4850.796749971162, 00:19:24.536 "mibps": 18.948424804574852, 00:19:24.536 "io_failed": 0, 00:19:24.536 "io_timeout": 0, 00:19:24.536 "avg_latency_us": 26333.28295346562, 00:19:24.536 "min_latency_us": 5271.373913043478, 00:19:24.536 "max_latency_us": 51061.09217391304 00:19:24.536 } 00:19:24.536 ], 00:19:24.536 "core_count": 1 00:19:24.536 } 00:19:24.536 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:24.536 
12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:24.536 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:24.536 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:24.536 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:24.536 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:24.536 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:24.536 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:24.536 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:24.536 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:24.536 nvmf_trace.0 00:19:24.536 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:24.536 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2552789 00:19:24.536 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2552789 ']' 00:19:24.536 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2552789 00:19:24.536 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:24.536 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.536 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2552789 00:19:24.536 12:43:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:24.536 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:24.536 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2552789' 00:19:24.536 killing process with pid 2552789 00:19:24.536 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2552789 00:19:24.536 Received shutdown signal, test time was about 10.000000 seconds 00:19:24.536 00:19:24.536 Latency(us) 00:19:24.536 [2024-11-28T11:43:07.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.536 [2024-11-28T11:43:07.055Z] =================================================================================================================== 00:19:24.536 [2024-11-28T11:43:07.055Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:24.536 12:43:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2552789 00:19:24.536 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:24.536 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:24.536 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:24.536 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:24.536 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:24.536 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:24.536 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:24.536 rmmod nvme_tcp 00:19:24.536 rmmod nvme_fabrics 00:19:24.795 rmmod nvme_keyring 00:19:24.795 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:19:24.795 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:24.795 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:24.795 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2552545 ']' 00:19:24.795 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2552545 00:19:24.795 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2552545 ']' 00:19:24.795 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2552545 00:19:24.795 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:24.795 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.795 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2552545 00:19:24.795 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:24.795 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:24.795 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2552545' 00:19:24.795 killing process with pid 2552545 00:19:24.795 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2552545 00:19:24.795 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2552545 00:19:24.795 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:24.795 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:24.795 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:24.795 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:19:24.795 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:24.795 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:24.795 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:25.054 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:25.054 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:25.054 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.054 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:25.054 12:43:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.959 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:26.959 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.4L3 00:19:26.959 00:19:26.959 real 0m19.825s 00:19:26.959 user 0m20.776s 00:19:26.959 sys 0m9.322s 00:19:26.959 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.959 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:26.959 ************************************ 00:19:26.959 END TEST nvmf_fips 00:19:26.959 ************************************ 00:19:26.959 12:43:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:26.959 12:43:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:26.959 12:43:09 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.959 12:43:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:26.959 ************************************ 00:19:26.960 START TEST nvmf_control_msg_list 00:19:26.960 ************************************ 00:19:26.960 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:27.218 * Looking for test storage... 00:19:27.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:27.218 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:27.218 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:19:27.218 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:27.219 12:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:27.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.219 --rc genhtml_branch_coverage=1 00:19:27.219 --rc genhtml_function_coverage=1 00:19:27.219 --rc genhtml_legend=1 00:19:27.219 --rc geninfo_all_blocks=1 00:19:27.219 --rc geninfo_unexecuted_blocks=1 00:19:27.219 00:19:27.219 ' 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:27.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.219 --rc genhtml_branch_coverage=1 00:19:27.219 --rc genhtml_function_coverage=1 00:19:27.219 --rc genhtml_legend=1 00:19:27.219 --rc geninfo_all_blocks=1 00:19:27.219 --rc geninfo_unexecuted_blocks=1 00:19:27.219 00:19:27.219 ' 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:27.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.219 --rc genhtml_branch_coverage=1 00:19:27.219 --rc genhtml_function_coverage=1 00:19:27.219 --rc genhtml_legend=1 00:19:27.219 --rc geninfo_all_blocks=1 00:19:27.219 --rc geninfo_unexecuted_blocks=1 00:19:27.219 00:19:27.219 ' 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:19:27.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.219 --rc genhtml_branch_coverage=1 00:19:27.219 --rc genhtml_function_coverage=1 00:19:27.219 --rc genhtml_legend=1 00:19:27.219 --rc geninfo_all_blocks=1 00:19:27.219 --rc geninfo_unexecuted_blocks=1 00:19:27.219 00:19:27.219 ' 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.219 12:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:27.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:27.219 12:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:27.219 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:27.220 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:27.220 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:27.220 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:27.220 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:27.220 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.220 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:27.220 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.220 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:27.220 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:27.220 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:27.220 12:43:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:32.491 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:32.491 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:32.491 12:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:32.491 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:32.491 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:32.491 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:32.491 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:32.491 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:32.491 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:32.491 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:32.492 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:32.492 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:32.492 12:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:32.492 Found net devices under 0000:86:00.0: cvl_0_0 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:32.492 12:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:32.492 Found net devices under 0000:86:00.1: cvl_0_1 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:32.492 12:43:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:32.754 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:32.754 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:32.754 12:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:32.754 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:32.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:32.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:19:32.754 00:19:32.754 --- 10.0.0.2 ping statistics --- 00:19:32.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.754 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:19:32.754 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:32.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:32.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:19:32.754 00:19:32.754 --- 10.0.0.1 ping statistics --- 00:19:32.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.754 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:19:32.754 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:32.754 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:32.754 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:32.754 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:32.754 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:32.754 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:32.755 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:19:32.755 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:32.755 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:32.755 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:32.755 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:32.755 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:32.755 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:32.755 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2557935 00:19:32.755 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2557935 00:19:32.755 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:32.755 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2557935 ']' 00:19:32.755 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.755 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.755 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:32.755 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.755 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:32.755 [2024-11-28 12:43:15.151115] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:19:32.755 [2024-11-28 12:43:15.151159] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.755 [2024-11-28 12:43:15.213504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.755 [2024-11-28 12:43:15.255250] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.755 [2024-11-28 12:43:15.255287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.755 [2024-11-28 12:43:15.255294] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.755 [2024-11-28 12:43:15.255301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.755 [2024-11-28 12:43:15.255306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:32.755 [2024-11-28 12:43:15.255871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.016 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:33.016 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:33.016 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:33.016 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:33.016 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:33.016 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.016 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:33.016 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:33.016 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:33.016 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.016 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:33.016 [2024-11-28 12:43:15.389212] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.016 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.016 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:33.016 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.016 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:33.016 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.016 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:33.016 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.016 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:33.016 Malloc0 00:19:33.016 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.016 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:33.017 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.017 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:33.017 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.017 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:33.017 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.017 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:33.017 [2024-11-28 12:43:15.429497] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:33.017 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.017 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2558150 00:19:33.017 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:33.017 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2558152 00:19:33.017 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:33.017 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2558154 00:19:33.017 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2558150 00:19:33.017 12:43:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:33.017 [2024-11-28 12:43:15.503924] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:19:33.017 [2024-11-28 12:43:15.514012] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:33.017 [2024-11-28 12:43:15.514158] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:34.396 Initializing NVMe Controllers 00:19:34.396 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:34.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:34.396 Initialization complete. Launching workers. 00:19:34.396 ======================================================== 00:19:34.396 Latency(us) 00:19:34.397 Device Information : IOPS MiB/s Average min max 00:19:34.397 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3888.00 15.19 256.82 145.43 426.28 00:19:34.397 ======================================================== 00:19:34.397 Total : 3888.00 15.19 256.82 145.43 426.28 00:19:34.397 00:19:34.397 Initializing NVMe Controllers 00:19:34.397 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:34.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:34.397 Initialization complete. Launching workers. 
00:19:34.397 ======================================================== 00:19:34.397 Latency(us) 00:19:34.397 Device Information : IOPS MiB/s Average min max 00:19:34.397 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 41411.87 40819.81 41951.49 00:19:34.397 ======================================================== 00:19:34.397 Total : 25.00 0.10 41411.87 40819.81 41951.49 00:19:34.397 00:19:34.397 [2024-11-28 12:43:16.655036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc97b0 is same with the state(6) to be set 00:19:34.397 Initializing NVMe Controllers 00:19:34.397 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:34.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:34.397 Initialization complete. Launching workers. 00:19:34.397 ======================================================== 00:19:34.397 Latency(us) 00:19:34.397 Device Information : IOPS MiB/s Average min max 00:19:34.397 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 41456.98 40844.94 41932.73 00:19:34.397 ======================================================== 00:19:34.397 Total : 25.00 0.10 41456.98 40844.94 41932.73 00:19:34.397 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2558152 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2558154 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@121 -- # sync 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:34.397 rmmod nvme_tcp 00:19:34.397 rmmod nvme_fabrics 00:19:34.397 rmmod nvme_keyring 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2557935 ']' 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2557935 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2557935 ']' 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2557935 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2557935 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2557935' 00:19:34.397 killing process with pid 2557935 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2557935 00:19:34.397 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2557935 00:19:34.656 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:34.656 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:34.656 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:34.656 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:34.656 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:34.656 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:34.656 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:34.656 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:34.656 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:34.656 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.656 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:34.656 12:43:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.564 12:43:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:36.564 00:19:36.564 real 0m9.563s 00:19:36.564 user 0m6.620s 00:19:36.564 sys 0m4.918s 00:19:36.564 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:36.564 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:36.564 ************************************ 00:19:36.564 END TEST nvmf_control_msg_list 00:19:36.564 ************************************ 00:19:36.564 12:43:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:36.564 12:43:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:36.564 12:43:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:36.564 12:43:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:36.823 ************************************ 00:19:36.823 START TEST nvmf_wait_for_buf 00:19:36.823 ************************************ 00:19:36.823 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:36.823 * Looking for test storage... 
00:19:36.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:36.823 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:36.823 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:36.823 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:36.823 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:36.823 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:36.823 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:36.823 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:36.823 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:36.823 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:36.823 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:36.823 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:36.823 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:36.823 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:36.823 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:36.823 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:36.823 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:19:36.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.824 --rc genhtml_branch_coverage=1 00:19:36.824 --rc genhtml_function_coverage=1 00:19:36.824 --rc genhtml_legend=1 00:19:36.824 --rc geninfo_all_blocks=1 00:19:36.824 --rc geninfo_unexecuted_blocks=1 00:19:36.824 00:19:36.824 ' 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:36.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.824 --rc genhtml_branch_coverage=1 00:19:36.824 --rc genhtml_function_coverage=1 00:19:36.824 --rc genhtml_legend=1 00:19:36.824 --rc geninfo_all_blocks=1 00:19:36.824 --rc geninfo_unexecuted_blocks=1 00:19:36.824 00:19:36.824 ' 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:36.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.824 --rc genhtml_branch_coverage=1 00:19:36.824 --rc genhtml_function_coverage=1 00:19:36.824 --rc genhtml_legend=1 00:19:36.824 --rc geninfo_all_blocks=1 00:19:36.824 --rc geninfo_unexecuted_blocks=1 00:19:36.824 00:19:36.824 ' 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:36.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.824 --rc genhtml_branch_coverage=1 00:19:36.824 --rc genhtml_function_coverage=1 00:19:36.824 --rc genhtml_legend=1 00:19:36.824 --rc geninfo_all_blocks=1 00:19:36.824 --rc geninfo_unexecuted_blocks=1 00:19:36.824 00:19:36.824 ' 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:36.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:36.824 12:43:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:42.096 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:42.096 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:42.096 Found net devices under 0000:86:00.0: cvl_0_0 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:42.096 12:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:42.096 Found net devices under 0000:86:00.1: cvl_0_1 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:42.096 12:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:42.096 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:42.097 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:42.097 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:42.097 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:42.097 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:42.355 12:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:42.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:42.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:19:42.355 00:19:42.355 --- 10.0.0.2 ping statistics --- 00:19:42.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.355 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:42.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:42.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:19:42.355 00:19:42.355 --- 10.0.0.1 ping statistics --- 00:19:42.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.355 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2561714 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2561714 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2561714 ']' 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.355 12:43:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.614 [2024-11-28 12:43:24.895967] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:19:42.614 [2024-11-28 12:43:24.896012] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.614 [2024-11-28 12:43:24.961092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.614 [2024-11-28 12:43:24.999871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.614 [2024-11-28 12:43:24.999912] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:42.614 [2024-11-28 12:43:24.999919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.614 [2024-11-28 12:43:24.999925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.614 [2024-11-28 12:43:24.999930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:42.614 [2024-11-28 12:43:25.000494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.614 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.614 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:42.614 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:42.614 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:42.614 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.614 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.614 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:42.614 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:42.614 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:42.614 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.614 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.614 
12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.614 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:42.614 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.614 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.614 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.614 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:42.614 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.614 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.881 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.881 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:42.881 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.881 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.881 Malloc0 00:19:42.881 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.881 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:42.881 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.881 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:19:42.881 [2024-11-28 12:43:25.188066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.881 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.881 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:42.881 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.881 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.881 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.881 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:42.881 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.881 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.881 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.881 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:42.881 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.881 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:42.881 [2024-11-28 12:43:25.216268] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.881 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:42.881 12:43:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:42.881 [2024-11-28 12:43:25.302032] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:44.257 Initializing NVMe Controllers 00:19:44.257 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:44.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:44.257 Initialization complete. Launching workers. 00:19:44.257 ======================================================== 00:19:44.257 Latency(us) 00:19:44.257 Device Information : IOPS MiB/s Average min max 00:19:44.257 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 30.00 3.75 136499.37 7287.91 191534.95 00:19:44.257 ======================================================== 00:19:44.257 Total : 30.00 3.75 136499.37 7287.91 191534.95 00:19:44.257 00:19:44.257 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:44.257 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:44.257 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.257 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:44.258 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.258 12:43:26 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=454 00:19:44.258 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 454 -eq 0 ]] 00:19:44.258 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:44.258 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:44.258 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:44.258 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:44.258 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:44.258 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:44.258 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:44.258 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:44.258 rmmod nvme_tcp 00:19:44.258 rmmod nvme_fabrics 00:19:44.258 rmmod nvme_keyring 00:19:44.258 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:44.258 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:44.258 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:44.258 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2561714 ']' 00:19:44.258 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2561714 00:19:44.258 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2561714 ']' 00:19:44.258 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2561714 
00:19:44.258 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:19:44.258 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:44.258 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2561714 00:19:44.516 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:44.516 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:44.516 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2561714' 00:19:44.516 killing process with pid 2561714 00:19:44.516 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2561714 00:19:44.516 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2561714 00:19:44.516 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:44.516 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:44.516 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:44.516 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:44.516 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:44.516 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:44.516 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:44.516 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:44.516 12:43:26 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:44.516 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.516 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:44.516 12:43:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.048 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:47.048 00:19:47.048 real 0m9.939s 00:19:47.048 user 0m3.761s 00:19:47.048 sys 0m4.617s 00:19:47.048 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:47.048 12:43:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:47.048 ************************************ 00:19:47.048 END TEST nvmf_wait_for_buf 00:19:47.048 ************************************ 00:19:47.048 12:43:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:47.048 12:43:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:47.048 12:43:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:47.048 12:43:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:47.048 12:43:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:47.048 12:43:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:52.316 
12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:52.316 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.316 12:43:34 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:52.316 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:52.316 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:52.317 Found net devices under 0000:86:00.0: cvl_0_0 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:52.317 Found net devices under 0000:86:00.1: cvl_0_1 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:52.317 ************************************ 00:19:52.317 START TEST nvmf_perf_adq 00:19:52.317 ************************************ 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:52.317 * Looking for test storage... 00:19:52.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:52.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.317 --rc genhtml_branch_coverage=1 00:19:52.317 --rc genhtml_function_coverage=1 00:19:52.317 --rc genhtml_legend=1 00:19:52.317 --rc geninfo_all_blocks=1 00:19:52.317 --rc geninfo_unexecuted_blocks=1 00:19:52.317 00:19:52.317 ' 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:52.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.317 --rc genhtml_branch_coverage=1 00:19:52.317 --rc genhtml_function_coverage=1 00:19:52.317 --rc genhtml_legend=1 00:19:52.317 --rc geninfo_all_blocks=1 00:19:52.317 --rc geninfo_unexecuted_blocks=1 00:19:52.317 00:19:52.317 ' 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:52.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.317 --rc genhtml_branch_coverage=1 00:19:52.317 --rc genhtml_function_coverage=1 00:19:52.317 --rc genhtml_legend=1 00:19:52.317 --rc geninfo_all_blocks=1 00:19:52.317 --rc geninfo_unexecuted_blocks=1 00:19:52.317 00:19:52.317 ' 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:52.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.317 --rc genhtml_branch_coverage=1 00:19:52.317 --rc genhtml_function_coverage=1 00:19:52.317 --rc genhtml_legend=1 00:19:52.317 --rc geninfo_all_blocks=1 00:19:52.317 --rc geninfo_unexecuted_blocks=1 00:19:52.317 00:19:52.317 ' 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.317 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.318 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.318 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:52.318 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.318 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:52.318 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:52.318 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:52.318 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:52.318 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.318 12:43:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.318 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:52.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:52.318 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:52.318 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:52.318 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:52.318 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:52.318 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:52.318 12:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:58.892 12:43:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:58.892 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:58.892 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:58.892 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:58.893 Found net devices under 0000:86:00.0: cvl_0_0 00:19:58.893 12:43:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:58.893 Found net devices under 0000:86:00.1: cvl_0_1 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:19:58.893 12:43:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:59.176 12:43:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:01.171 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:06.455 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:06.455 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:06.455 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.455 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:06.455 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:06.455 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:06.455 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.455 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:06.455 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.455 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:06.455 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:06.455 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:06.455 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.455 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:06.456 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:06.456 12:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:06.456 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:06.456 Found net devices under 0000:86:00.0: cvl_0_0 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:06.456 Found net devices under 0000:86:00.1: cvl_0_1 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:06.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:06.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:20:06.456 00:20:06.456 --- 10.0.0.2 ping statistics --- 00:20:06.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.456 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:06.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:06.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:20:06.456 00:20:06.456 --- 10.0.0.1 ping statistics --- 00:20:06.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.456 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:06.456 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2570059 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2570059 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2570059 ']' 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.457 [2024-11-28 12:43:48.741311] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:20:06.457 [2024-11-28 12:43:48.741359] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.457 [2024-11-28 12:43:48.809726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:06.457 [2024-11-28 12:43:48.854489] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.457 [2024-11-28 12:43:48.854528] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.457 [2024-11-28 12:43:48.854534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.457 [2024-11-28 12:43:48.854540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.457 [2024-11-28 12:43:48.854546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:06.457 [2024-11-28 12:43:48.856165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.457 [2024-11-28 12:43:48.856264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.457 [2024-11-28 12:43:48.856287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:06.457 [2024-11-28 12:43:48.856289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.457 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.716 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:06.716 12:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:06.716 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.716 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.716 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.716 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:06.716 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.716 12:43:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.716 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.716 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:06.716 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.717 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.717 [2024-11-28 12:43:49.079783] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.717 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.717 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:06.717 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.717 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.717 Malloc1 00:20:06.717 12:43:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.717 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:06.717 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.717 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.717 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.717 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:06.717 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.717 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.717 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.717 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:06.717 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.717 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.717 [2024-11-28 12:43:49.149630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.717 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.717 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2570099 00:20:06.717 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:06.717 12:43:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:09.251 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:09.251 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.251 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:09.251 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.251 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:09.251 "tick_rate": 2300000000, 00:20:09.251 "poll_groups": [ 00:20:09.251 { 00:20:09.251 "name": "nvmf_tgt_poll_group_000", 00:20:09.251 "admin_qpairs": 1, 00:20:09.251 "io_qpairs": 1, 00:20:09.251 "current_admin_qpairs": 1, 00:20:09.251 "current_io_qpairs": 1, 00:20:09.251 "pending_bdev_io": 0, 00:20:09.251 "completed_nvme_io": 19831, 00:20:09.251 "transports": [ 00:20:09.251 { 00:20:09.251 "trtype": "TCP" 00:20:09.251 } 00:20:09.251 ] 00:20:09.251 }, 00:20:09.251 { 00:20:09.251 "name": "nvmf_tgt_poll_group_001", 00:20:09.251 "admin_qpairs": 0, 00:20:09.251 "io_qpairs": 1, 00:20:09.251 "current_admin_qpairs": 0, 00:20:09.251 "current_io_qpairs": 1, 00:20:09.251 "pending_bdev_io": 0, 00:20:09.251 "completed_nvme_io": 20274, 00:20:09.251 "transports": [ 00:20:09.251 { 00:20:09.251 "trtype": "TCP" 00:20:09.251 } 00:20:09.251 ] 00:20:09.251 }, 00:20:09.252 { 00:20:09.252 "name": "nvmf_tgt_poll_group_002", 00:20:09.252 "admin_qpairs": 0, 00:20:09.252 "io_qpairs": 1, 00:20:09.252 "current_admin_qpairs": 0, 00:20:09.252 "current_io_qpairs": 1, 00:20:09.252 "pending_bdev_io": 0, 00:20:09.252 "completed_nvme_io": 20133, 00:20:09.252 
"transports": [ 00:20:09.252 { 00:20:09.252 "trtype": "TCP" 00:20:09.252 } 00:20:09.252 ] 00:20:09.252 }, 00:20:09.252 { 00:20:09.252 "name": "nvmf_tgt_poll_group_003", 00:20:09.252 "admin_qpairs": 0, 00:20:09.252 "io_qpairs": 1, 00:20:09.252 "current_admin_qpairs": 0, 00:20:09.252 "current_io_qpairs": 1, 00:20:09.252 "pending_bdev_io": 0, 00:20:09.252 "completed_nvme_io": 19875, 00:20:09.252 "transports": [ 00:20:09.252 { 00:20:09.252 "trtype": "TCP" 00:20:09.252 } 00:20:09.252 ] 00:20:09.252 } 00:20:09.252 ] 00:20:09.252 }' 00:20:09.252 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:09.252 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:09.252 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:09.252 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:09.252 12:43:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2570099 00:20:17.376 Initializing NVMe Controllers 00:20:17.376 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:17.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:17.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:17.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:17.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:17.376 Initialization complete. Launching workers. 
00:20:17.376 ======================================================== 00:20:17.376 Latency(us) 00:20:17.376 Device Information : IOPS MiB/s Average min max 00:20:17.376 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10559.36 41.25 6062.27 2205.00 9850.36 00:20:17.376 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10740.76 41.96 5958.87 1971.46 9388.29 00:20:17.376 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10715.86 41.86 5972.51 1732.08 9731.91 00:20:17.376 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10579.16 41.32 6050.11 1705.65 9814.56 00:20:17.376 ======================================================== 00:20:17.376 Total : 42595.14 166.39 6010.60 1705.65 9850.36 00:20:17.376 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:17.376 rmmod nvme_tcp 00:20:17.376 rmmod nvme_fabrics 00:20:17.376 rmmod nvme_keyring 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:17.376 12:43:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2570059 ']' 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2570059 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2570059 ']' 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2570059 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2570059 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2570059' 00:20:17.376 killing process with pid 2570059 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2570059 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2570059 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:17.376 
12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:17.376 12:43:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.281 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:19.281 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:19.281 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:19.281 12:44:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:20.659 12:44:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:22.567 12:44:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:27.843 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:27.843 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:27.843 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.843 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:27.843 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:27.843 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:27.843 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.843 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.843 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.843 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:27.843 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:27.843 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:27.843 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.843 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:27.843 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:27.843 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:27.843 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:27.843 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:27.843 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:27.843 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:27.843 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:27.844 12:44:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:27.844 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:27.844 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:27.844 Found net devices under 0000:86:00.0: cvl_0_0 00:20:27.844 12:44:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:27.844 Found net devices under 0000:86:00.1: cvl_0_1 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:27.844 12:44:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:27.844 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:27.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:27.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:20:27.844 00:20:27.844 --- 10.0.0.2 ping statistics --- 00:20:27.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.844 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:20:27.844 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:27.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:27.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:20:27.844 00:20:27.844 --- 10.0.0.1 ping statistics --- 00:20:27.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.844 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:20:27.844 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:27.844 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:27.844 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:27.844 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:27.844 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:27.844 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:27.844 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:27.844 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:27.844 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:27.844 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:27.845 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:27.845 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:27.845 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:27.845 net.core.busy_poll = 1 00:20:27.845 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:27.845 net.core.busy_read = 1 00:20:27.845 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:27.845 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:27.845 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:27.845 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:27.845 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:27.845 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:27.845 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:27.845 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:27.845 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.845 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2573869 00:20:27.845 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2573869 00:20:27.845 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2573869 ']' 00:20:27.845 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:20:27.845 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:27.845 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.845 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:27.845 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:27.845 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.845 [2024-11-28 12:44:10.296964] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:20:27.845 [2024-11-28 12:44:10.297010] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.104 [2024-11-28 12:44:10.364101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:28.104 [2024-11-28 12:44:10.406934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.104 [2024-11-28 12:44:10.406979] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.104 [2024-11-28 12:44:10.406987] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.104 [2024-11-28 12:44:10.406993] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:28.104 [2024-11-28 12:44:10.406998] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:28.104 [2024-11-28 12:44:10.408535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.104 [2024-11-28 12:44:10.408630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.104 [2024-11-28 12:44:10.408701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:28.104 [2024-11-28 12:44:10.408702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.104 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.104 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:28.104 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:28.104 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:28.104 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.104 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.104 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:28.104 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:28.104 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:28.104 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.104 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.104 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:28.104 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:28.104 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:28.104 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.104 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.105 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.105 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:28.105 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.105 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.105 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.105 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:28.105 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.105 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.105 [2024-11-28 12:44:10.620045] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.364 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.364 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:28.364 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.364 12:44:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.364 Malloc1 00:20:28.364 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.364 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:28.364 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.364 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.364 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.364 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:28.364 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.364 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.364 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.364 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:28.364 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.364 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.364 [2024-11-28 12:44:10.686137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.364 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.364 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2574100 
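The target setup replayed by `rpc_cmd` above (perf_adq.sh@45-49) corresponds to the following `rpc.py` sequence. This is a hedged sketch: it assumes an `nvmf_tgt` is already running and listening on the default `/var/tmp/spdk.sock`, and that `$SPDK_DIR` (a placeholder, not from the log) points at an SPDK checkout; the flags themselves are taken verbatim from the run.

```shell
# Assumed helper path; rpc_cmd in the autotest harness wraps this script.
rpc="$SPDK_DIR/scripts/rpc.py"

# Transport with ADQ-friendly options: 8 KiB I/O units, socket priority 1.
$rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
# Backing namespace: 64 MiB malloc bdev with 512-byte blocks.
$rpc bdev_malloc_create 64 512 -b Malloc1
# Subsystem allowing any host, with the serial number used in this run.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
# Listener on the namespaced target address from the log.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

This is a configuration fragment against a live target, so it is not runnable standalone; after the listener is added, `spdk_nvme_perf` can connect with the `trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420` connection string shown above.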
00:20:28.364 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:28.364 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:30.270 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:30.270 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.270 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:30.270 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.270 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:30.270 "tick_rate": 2300000000, 00:20:30.270 "poll_groups": [ 00:20:30.270 { 00:20:30.270 "name": "nvmf_tgt_poll_group_000", 00:20:30.270 "admin_qpairs": 1, 00:20:30.270 "io_qpairs": 3, 00:20:30.270 "current_admin_qpairs": 1, 00:20:30.270 "current_io_qpairs": 3, 00:20:30.270 "pending_bdev_io": 0, 00:20:30.270 "completed_nvme_io": 29505, 00:20:30.270 "transports": [ 00:20:30.270 { 00:20:30.270 "trtype": "TCP" 00:20:30.270 } 00:20:30.270 ] 00:20:30.270 }, 00:20:30.270 { 00:20:30.270 "name": "nvmf_tgt_poll_group_001", 00:20:30.270 "admin_qpairs": 0, 00:20:30.270 "io_qpairs": 1, 00:20:30.270 "current_admin_qpairs": 0, 00:20:30.270 "current_io_qpairs": 1, 00:20:30.270 "pending_bdev_io": 0, 00:20:30.270 "completed_nvme_io": 27385, 00:20:30.270 "transports": [ 00:20:30.271 { 00:20:30.271 "trtype": "TCP" 00:20:30.271 } 00:20:30.271 ] 00:20:30.271 }, 00:20:30.271 { 00:20:30.271 "name": "nvmf_tgt_poll_group_002", 00:20:30.271 "admin_qpairs": 0, 00:20:30.271 "io_qpairs": 0, 00:20:30.271 "current_admin_qpairs": 0, 
00:20:30.271 "current_io_qpairs": 0, 00:20:30.271 "pending_bdev_io": 0, 00:20:30.271 "completed_nvme_io": 0, 00:20:30.271 "transports": [ 00:20:30.271 { 00:20:30.271 "trtype": "TCP" 00:20:30.271 } 00:20:30.271 ] 00:20:30.271 }, 00:20:30.271 { 00:20:30.271 "name": "nvmf_tgt_poll_group_003", 00:20:30.271 "admin_qpairs": 0, 00:20:30.271 "io_qpairs": 0, 00:20:30.271 "current_admin_qpairs": 0, 00:20:30.271 "current_io_qpairs": 0, 00:20:30.271 "pending_bdev_io": 0, 00:20:30.271 "completed_nvme_io": 0, 00:20:30.271 "transports": [ 00:20:30.271 { 00:20:30.271 "trtype": "TCP" 00:20:30.271 } 00:20:30.271 ] 00:20:30.271 } 00:20:30.271 ] 00:20:30.271 }' 00:20:30.271 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:30.271 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:30.271 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:30.271 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:30.271 12:44:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2574100 00:20:38.393 Initializing NVMe Controllers 00:20:38.393 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:38.393 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:38.393 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:38.393 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:38.393 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:38.393 Initialization complete. Launching workers. 
00:20:38.393 ======================================================== 00:20:38.393 Latency(us) 00:20:38.393 Device Information : IOPS MiB/s Average min max 00:20:38.393 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5559.40 21.72 11539.64 1719.29 57320.93 00:20:38.393 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4923.50 19.23 13005.16 1637.07 59447.75 00:20:38.393 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5082.90 19.86 12627.74 1091.61 60366.37 00:20:38.393 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 14637.90 57.18 4371.70 1056.89 44983.37 00:20:38.393 ======================================================== 00:20:38.393 Total : 30203.70 117.98 8487.78 1056.89 60366.37 00:20:38.393 00:20:38.393 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:38.393 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:38.393 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:38.393 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:38.393 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:38.393 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:38.393 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:38.393 rmmod nvme_tcp 00:20:38.653 rmmod nvme_fabrics 00:20:38.653 rmmod nvme_keyring 00:20:38.653 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:38.653 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:38.653 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:38.653 12:44:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2573869 ']' 00:20:38.653 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2573869 00:20:38.653 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2573869 ']' 00:20:38.653 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2573869 00:20:38.653 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:38.653 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.653 12:44:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2573869 00:20:38.653 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:38.653 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:38.653 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2573869' 00:20:38.653 killing process with pid 2573869 00:20:38.653 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2573869 00:20:38.653 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2573869 00:20:38.912 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:38.912 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:38.912 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:38.912 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:38.912 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:38.912 
12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:38.913 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:38.913 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:38.913 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:38.913 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.913 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.913 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.204 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:42.205 00:20:42.205 real 0m49.748s 00:20:42.205 user 2m43.810s 00:20:42.205 sys 0m10.230s 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:42.205 ************************************ 00:20:42.205 END TEST nvmf_perf_adq 00:20:42.205 ************************************ 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:20:42.205 ************************************ 00:20:42.205 START TEST nvmf_shutdown 00:20:42.205 ************************************ 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:42.205 * Looking for test storage... 00:20:42.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.205 12:44:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:42.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.205 --rc genhtml_branch_coverage=1 00:20:42.205 --rc genhtml_function_coverage=1 00:20:42.205 --rc genhtml_legend=1 00:20:42.205 --rc geninfo_all_blocks=1 00:20:42.205 --rc geninfo_unexecuted_blocks=1 00:20:42.205 00:20:42.205 ' 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:42.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.205 --rc genhtml_branch_coverage=1 00:20:42.205 --rc genhtml_function_coverage=1 00:20:42.205 --rc genhtml_legend=1 00:20:42.205 --rc geninfo_all_blocks=1 00:20:42.205 --rc geninfo_unexecuted_blocks=1 00:20:42.205 00:20:42.205 ' 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:42.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.205 --rc genhtml_branch_coverage=1 00:20:42.205 --rc genhtml_function_coverage=1 00:20:42.205 --rc genhtml_legend=1 00:20:42.205 --rc geninfo_all_blocks=1 00:20:42.205 --rc geninfo_unexecuted_blocks=1 00:20:42.205 00:20:42.205 ' 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:42.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.205 --rc genhtml_branch_coverage=1 00:20:42.205 --rc genhtml_function_coverage=1 00:20:42.205 --rc genhtml_legend=1 00:20:42.205 --rc geninfo_all_blocks=1 00:20:42.205 --rc geninfo_unexecuted_blocks=1 00:20:42.205 00:20:42.205 ' 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:42.205 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:42.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:42.206 ************************************ 00:20:42.206 START TEST nvmf_shutdown_tc1 00:20:42.206 ************************************ 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:42.206 12:44:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:47.496 12:44:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.496 12:44:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:47.496 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.496 12:44:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:47.496 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:47.496 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:47.497 Found net devices under 0000:86:00.0: cvl_0_0 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:47.497 Found net devices under 0000:86:00.1: cvl_0_1 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:47.497 12:44:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:47.497 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:47.497 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:47.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:47.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:20:47.497 00:20:47.497 --- 10.0.0.2 ping statistics --- 00:20:47.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.497 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:20:47.756 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:47.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:47.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:20:47.756 00:20:47.756 --- 10.0.0.1 ping statistics --- 00:20:47.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.756 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:20:47.756 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.756 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:47.756 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:47.756 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.756 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:47.756 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:47.756 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.756 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:47.756 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:47.756 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:47.756 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:47.756 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:47.756 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:47.756 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:47.756 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2579339 00:20:47.756 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2579339 00:20:47.756 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2579339 ']' 00:20:47.756 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.756 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.756 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:47.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.756 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.756 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:47.756 [2024-11-28 12:44:30.101896] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:20:47.756 [2024-11-28 12:44:30.101957] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.756 [2024-11-28 12:44:30.169789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:47.756 [2024-11-28 12:44:30.214393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.756 [2024-11-28 12:44:30.214429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.756 [2024-11-28 12:44:30.214436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.756 [2024-11-28 12:44:30.214446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.756 [2024-11-28 12:44:30.214468] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:47.756 [2024-11-28 12:44:30.216071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.756 [2024-11-28 12:44:30.216156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:47.756 [2024-11-28 12:44:30.216266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.756 [2024-11-28 12:44:30.216267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:48.015 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.016 [2024-11-28 12:44:30.355358] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.016 12:44:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.016 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.016 Malloc1 00:20:48.016 [2024-11-28 12:44:30.470377] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.016 Malloc2 00:20:48.274 Malloc3 00:20:48.274 Malloc4 00:20:48.274 Malloc5 00:20:48.274 Malloc6 00:20:48.274 Malloc7 00:20:48.274 Malloc8 00:20:48.274 Malloc9 
00:20:48.534 Malloc10 00:20:48.534 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.534 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:48.534 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:48.534 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.534 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2579611 00:20:48.534 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2579611 /var/tmp/bdevperf.sock 00:20:48.534 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2579611 ']' 00:20:48.534 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.534 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:48.534 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.534 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:48.534 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:48.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:48.534 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:48.534 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.534 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:48.534 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.534 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.534 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.534 { 00:20:48.534 "params": { 00:20:48.534 "name": "Nvme$subsystem", 00:20:48.534 "trtype": "$TEST_TRANSPORT", 00:20:48.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.534 "adrfam": "ipv4", 00:20:48.534 "trsvcid": "$NVMF_PORT", 00:20:48.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.534 "hdgst": ${hdgst:-false}, 00:20:48.534 "ddgst": ${ddgst:-false} 00:20:48.534 }, 00:20:48.534 "method": "bdev_nvme_attach_controller" 00:20:48.534 } 00:20:48.534 EOF 00:20:48.534 )") 00:20:48.534 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:48.534 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.534 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.534 { 00:20:48.534 "params": { 00:20:48.535 "name": "Nvme$subsystem", 00:20:48.535 "trtype": "$TEST_TRANSPORT", 00:20:48.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.535 "adrfam": "ipv4", 00:20:48.535 "trsvcid": "$NVMF_PORT", 00:20:48.535 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.535 "hdgst": ${hdgst:-false}, 00:20:48.535 "ddgst": ${ddgst:-false} 00:20:48.535 }, 00:20:48.535 "method": "bdev_nvme_attach_controller" 00:20:48.535 } 00:20:48.535 EOF 00:20:48.535 )") 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.535 { 00:20:48.535 "params": { 00:20:48.535 "name": "Nvme$subsystem", 00:20:48.535 "trtype": "$TEST_TRANSPORT", 00:20:48.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.535 "adrfam": "ipv4", 00:20:48.535 "trsvcid": "$NVMF_PORT", 00:20:48.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.535 "hdgst": ${hdgst:-false}, 00:20:48.535 "ddgst": ${ddgst:-false} 00:20:48.535 }, 00:20:48.535 "method": "bdev_nvme_attach_controller" 00:20:48.535 } 00:20:48.535 EOF 00:20:48.535 )") 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.535 { 00:20:48.535 "params": { 00:20:48.535 "name": "Nvme$subsystem", 00:20:48.535 "trtype": "$TEST_TRANSPORT", 00:20:48.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.535 "adrfam": "ipv4", 00:20:48.535 "trsvcid": "$NVMF_PORT", 00:20:48.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.535 "hdgst": 
${hdgst:-false}, 00:20:48.535 "ddgst": ${ddgst:-false} 00:20:48.535 }, 00:20:48.535 "method": "bdev_nvme_attach_controller" 00:20:48.535 } 00:20:48.535 EOF 00:20:48.535 )") 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.535 { 00:20:48.535 "params": { 00:20:48.535 "name": "Nvme$subsystem", 00:20:48.535 "trtype": "$TEST_TRANSPORT", 00:20:48.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.535 "adrfam": "ipv4", 00:20:48.535 "trsvcid": "$NVMF_PORT", 00:20:48.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.535 "hdgst": ${hdgst:-false}, 00:20:48.535 "ddgst": ${ddgst:-false} 00:20:48.535 }, 00:20:48.535 "method": "bdev_nvme_attach_controller" 00:20:48.535 } 00:20:48.535 EOF 00:20:48.535 )") 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.535 { 00:20:48.535 "params": { 00:20:48.535 "name": "Nvme$subsystem", 00:20:48.535 "trtype": "$TEST_TRANSPORT", 00:20:48.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.535 "adrfam": "ipv4", 00:20:48.535 "trsvcid": "$NVMF_PORT", 00:20:48.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.535 "hdgst": ${hdgst:-false}, 00:20:48.535 "ddgst": ${ddgst:-false} 00:20:48.535 }, 00:20:48.535 "method": "bdev_nvme_attach_controller" 
00:20:48.535 } 00:20:48.535 EOF 00:20:48.535 )") 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.535 [2024-11-28 12:44:30.938540] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:20:48.535 [2024-11-28 12:44:30.938589] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.535 { 00:20:48.535 "params": { 00:20:48.535 "name": "Nvme$subsystem", 00:20:48.535 "trtype": "$TEST_TRANSPORT", 00:20:48.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.535 "adrfam": "ipv4", 00:20:48.535 "trsvcid": "$NVMF_PORT", 00:20:48.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.535 "hdgst": ${hdgst:-false}, 00:20:48.535 "ddgst": ${ddgst:-false} 00:20:48.535 }, 00:20:48.535 "method": "bdev_nvme_attach_controller" 00:20:48.535 } 00:20:48.535 EOF 00:20:48.535 )") 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.535 { 00:20:48.535 "params": { 00:20:48.535 "name": "Nvme$subsystem", 00:20:48.535 "trtype": "$TEST_TRANSPORT", 00:20:48.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.535 "adrfam": "ipv4", 00:20:48.535 "trsvcid": "$NVMF_PORT", 
00:20:48.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.535 "hdgst": ${hdgst:-false}, 00:20:48.535 "ddgst": ${ddgst:-false} 00:20:48.535 }, 00:20:48.535 "method": "bdev_nvme_attach_controller" 00:20:48.535 } 00:20:48.535 EOF 00:20:48.535 )") 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.535 { 00:20:48.535 "params": { 00:20:48.535 "name": "Nvme$subsystem", 00:20:48.535 "trtype": "$TEST_TRANSPORT", 00:20:48.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.535 "adrfam": "ipv4", 00:20:48.535 "trsvcid": "$NVMF_PORT", 00:20:48.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.535 "hdgst": ${hdgst:-false}, 00:20:48.535 "ddgst": ${ddgst:-false} 00:20:48.535 }, 00:20:48.535 "method": "bdev_nvme_attach_controller" 00:20:48.535 } 00:20:48.535 EOF 00:20:48.535 )") 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.535 { 00:20:48.535 "params": { 00:20:48.535 "name": "Nvme$subsystem", 00:20:48.535 "trtype": "$TEST_TRANSPORT", 00:20:48.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.535 "adrfam": "ipv4", 00:20:48.535 "trsvcid": "$NVMF_PORT", 00:20:48.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:20:48.535 "hdgst": ${hdgst:-false}, 00:20:48.535 "ddgst": ${ddgst:-false} 00:20:48.535 }, 00:20:48.535 "method": "bdev_nvme_attach_controller" 00:20:48.535 } 00:20:48.535 EOF 00:20:48.535 )") 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:48.535 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:48.535 "params": { 00:20:48.535 "name": "Nvme1", 00:20:48.535 "trtype": "tcp", 00:20:48.535 "traddr": "10.0.0.2", 00:20:48.535 "adrfam": "ipv4", 00:20:48.535 "trsvcid": "4420", 00:20:48.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.535 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:48.535 "hdgst": false, 00:20:48.535 "ddgst": false 00:20:48.535 }, 00:20:48.535 "method": "bdev_nvme_attach_controller" 00:20:48.535 },{ 00:20:48.535 "params": { 00:20:48.535 "name": "Nvme2", 00:20:48.535 "trtype": "tcp", 00:20:48.535 "traddr": "10.0.0.2", 00:20:48.535 "adrfam": "ipv4", 00:20:48.535 "trsvcid": "4420", 00:20:48.535 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:48.535 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:48.535 "hdgst": false, 00:20:48.535 "ddgst": false 00:20:48.535 }, 00:20:48.535 "method": "bdev_nvme_attach_controller" 00:20:48.535 },{ 00:20:48.535 "params": { 00:20:48.535 "name": "Nvme3", 00:20:48.535 "trtype": "tcp", 00:20:48.535 "traddr": "10.0.0.2", 00:20:48.535 "adrfam": "ipv4", 00:20:48.535 "trsvcid": "4420", 00:20:48.535 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:48.535 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:48.535 "hdgst": false, 00:20:48.535 "ddgst": false 00:20:48.535 }, 00:20:48.535 "method": "bdev_nvme_attach_controller" 00:20:48.535 },{ 00:20:48.536 "params": { 00:20:48.536 
"name": "Nvme4", 00:20:48.536 "trtype": "tcp", 00:20:48.536 "traddr": "10.0.0.2", 00:20:48.536 "adrfam": "ipv4", 00:20:48.536 "trsvcid": "4420", 00:20:48.536 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:48.536 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:48.536 "hdgst": false, 00:20:48.536 "ddgst": false 00:20:48.536 }, 00:20:48.536 "method": "bdev_nvme_attach_controller" 00:20:48.536 },{ 00:20:48.536 "params": { 00:20:48.536 "name": "Nvme5", 00:20:48.536 "trtype": "tcp", 00:20:48.536 "traddr": "10.0.0.2", 00:20:48.536 "adrfam": "ipv4", 00:20:48.536 "trsvcid": "4420", 00:20:48.536 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:48.536 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:48.536 "hdgst": false, 00:20:48.536 "ddgst": false 00:20:48.536 }, 00:20:48.536 "method": "bdev_nvme_attach_controller" 00:20:48.536 },{ 00:20:48.536 "params": { 00:20:48.536 "name": "Nvme6", 00:20:48.536 "trtype": "tcp", 00:20:48.536 "traddr": "10.0.0.2", 00:20:48.536 "adrfam": "ipv4", 00:20:48.536 "trsvcid": "4420", 00:20:48.536 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:48.536 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:48.536 "hdgst": false, 00:20:48.536 "ddgst": false 00:20:48.536 }, 00:20:48.536 "method": "bdev_nvme_attach_controller" 00:20:48.536 },{ 00:20:48.536 "params": { 00:20:48.536 "name": "Nvme7", 00:20:48.536 "trtype": "tcp", 00:20:48.536 "traddr": "10.0.0.2", 00:20:48.536 "adrfam": "ipv4", 00:20:48.536 "trsvcid": "4420", 00:20:48.536 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:48.536 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:48.536 "hdgst": false, 00:20:48.536 "ddgst": false 00:20:48.536 }, 00:20:48.536 "method": "bdev_nvme_attach_controller" 00:20:48.536 },{ 00:20:48.536 "params": { 00:20:48.536 "name": "Nvme8", 00:20:48.536 "trtype": "tcp", 00:20:48.536 "traddr": "10.0.0.2", 00:20:48.536 "adrfam": "ipv4", 00:20:48.536 "trsvcid": "4420", 00:20:48.536 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:48.536 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:48.536 
"hdgst": false, 00:20:48.536 "ddgst": false 00:20:48.536 }, 00:20:48.536 "method": "bdev_nvme_attach_controller" 00:20:48.536 },{ 00:20:48.536 "params": { 00:20:48.536 "name": "Nvme9", 00:20:48.536 "trtype": "tcp", 00:20:48.536 "traddr": "10.0.0.2", 00:20:48.536 "adrfam": "ipv4", 00:20:48.536 "trsvcid": "4420", 00:20:48.536 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:48.536 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:48.536 "hdgst": false, 00:20:48.536 "ddgst": false 00:20:48.536 }, 00:20:48.536 "method": "bdev_nvme_attach_controller" 00:20:48.536 },{ 00:20:48.536 "params": { 00:20:48.536 "name": "Nvme10", 00:20:48.536 "trtype": "tcp", 00:20:48.536 "traddr": "10.0.0.2", 00:20:48.536 "adrfam": "ipv4", 00:20:48.536 "trsvcid": "4420", 00:20:48.536 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:48.536 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:48.536 "hdgst": false, 00:20:48.536 "ddgst": false 00:20:48.536 }, 00:20:48.536 "method": "bdev_nvme_attach_controller" 00:20:48.536 }' 00:20:48.536 [2024-11-28 12:44:31.002792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.536 [2024-11-28 12:44:31.044201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.437 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.437 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:50.437 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:50.437 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.437 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:50.437 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.437 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2579611 00:20:50.437 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:50.437 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:51.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2579611 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:51.373 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2579339 00:20:51.373 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:51.373 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:51.373 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:51.373 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:51.373 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.373 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.373 { 00:20:51.373 "params": { 00:20:51.373 "name": "Nvme$subsystem", 00:20:51.373 "trtype": "$TEST_TRANSPORT", 00:20:51.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.373 "adrfam": "ipv4", 00:20:51.373 "trsvcid": "$NVMF_PORT", 00:20:51.373 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.373 "hdgst": ${hdgst:-false}, 00:20:51.373 "ddgst": ${ddgst:-false} 00:20:51.373 }, 00:20:51.373 "method": "bdev_nvme_attach_controller" 00:20:51.373 } 00:20:51.373 EOF 00:20:51.373 )") 00:20:51.373 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.373 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.373 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.373 { 00:20:51.373 "params": { 00:20:51.373 "name": "Nvme$subsystem", 00:20:51.373 "trtype": "$TEST_TRANSPORT", 00:20:51.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.373 "adrfam": "ipv4", 00:20:51.373 "trsvcid": "$NVMF_PORT", 00:20:51.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.373 "hdgst": ${hdgst:-false}, 00:20:51.373 "ddgst": ${ddgst:-false} 00:20:51.373 }, 00:20:51.373 "method": "bdev_nvme_attach_controller" 00:20:51.373 } 00:20:51.373 EOF 00:20:51.373 )") 00:20:51.373 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.373 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.373 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.373 { 00:20:51.373 "params": { 00:20:51.373 "name": "Nvme$subsystem", 00:20:51.373 "trtype": "$TEST_TRANSPORT", 00:20:51.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.373 "adrfam": "ipv4", 00:20:51.373 "trsvcid": "$NVMF_PORT", 00:20:51.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.373 "hdgst": 
${hdgst:-false}, 00:20:51.373 "ddgst": ${ddgst:-false} 00:20:51.373 }, 00:20:51.373 "method": "bdev_nvme_attach_controller" 00:20:51.373 } 00:20:51.373 EOF 00:20:51.373 )") 00:20:51.373 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.373 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.373 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.373 { 00:20:51.373 "params": { 00:20:51.373 "name": "Nvme$subsystem", 00:20:51.373 "trtype": "$TEST_TRANSPORT", 00:20:51.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.374 "adrfam": "ipv4", 00:20:51.374 "trsvcid": "$NVMF_PORT", 00:20:51.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.374 "hdgst": ${hdgst:-false}, 00:20:51.374 "ddgst": ${ddgst:-false} 00:20:51.374 }, 00:20:51.374 "method": "bdev_nvme_attach_controller" 00:20:51.374 } 00:20:51.374 EOF 00:20:51.374 )") 00:20:51.374 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.374 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.374 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.374 { 00:20:51.374 "params": { 00:20:51.374 "name": "Nvme$subsystem", 00:20:51.374 "trtype": "$TEST_TRANSPORT", 00:20:51.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.374 "adrfam": "ipv4", 00:20:51.374 "trsvcid": "$NVMF_PORT", 00:20:51.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.374 "hdgst": ${hdgst:-false}, 00:20:51.374 "ddgst": ${ddgst:-false} 00:20:51.374 }, 00:20:51.374 "method": "bdev_nvme_attach_controller" 
00:20:51.374 } 00:20:51.374 EOF 00:20:51.374 )") 00:20:51.374 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.374 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.374 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.374 { 00:20:51.374 "params": { 00:20:51.374 "name": "Nvme$subsystem", 00:20:51.374 "trtype": "$TEST_TRANSPORT", 00:20:51.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.374 "adrfam": "ipv4", 00:20:51.374 "trsvcid": "$NVMF_PORT", 00:20:51.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.374 "hdgst": ${hdgst:-false}, 00:20:51.374 "ddgst": ${ddgst:-false} 00:20:51.374 }, 00:20:51.374 "method": "bdev_nvme_attach_controller" 00:20:51.374 } 00:20:51.374 EOF 00:20:51.374 )") 00:20:51.374 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.374 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.374 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.374 { 00:20:51.374 "params": { 00:20:51.374 "name": "Nvme$subsystem", 00:20:51.374 "trtype": "$TEST_TRANSPORT", 00:20:51.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.374 "adrfam": "ipv4", 00:20:51.374 "trsvcid": "$NVMF_PORT", 00:20:51.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.374 "hdgst": ${hdgst:-false}, 00:20:51.374 "ddgst": ${ddgst:-false} 00:20:51.374 }, 00:20:51.374 "method": "bdev_nvme_attach_controller" 00:20:51.374 } 00:20:51.374 EOF 00:20:51.374 )") 00:20:51.374 [2024-11-28 12:44:33.874631] Starting SPDK v25.01-pre git sha1 
bf92c7a42 / DPDK 24.03.0 initialization... 00:20:51.374 [2024-11-28 12:44:33.874680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2580097 ] 00:20:51.374 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.374 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.374 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.374 { 00:20:51.374 "params": { 00:20:51.374 "name": "Nvme$subsystem", 00:20:51.374 "trtype": "$TEST_TRANSPORT", 00:20:51.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.374 "adrfam": "ipv4", 00:20:51.374 "trsvcid": "$NVMF_PORT", 00:20:51.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.374 "hdgst": ${hdgst:-false}, 00:20:51.374 "ddgst": ${ddgst:-false} 00:20:51.374 }, 00:20:51.374 "method": "bdev_nvme_attach_controller" 00:20:51.374 } 00:20:51.374 EOF 00:20:51.374 )") 00:20:51.374 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.374 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.374 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.374 { 00:20:51.374 "params": { 00:20:51.374 "name": "Nvme$subsystem", 00:20:51.374 "trtype": "$TEST_TRANSPORT", 00:20:51.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.374 "adrfam": "ipv4", 00:20:51.374 "trsvcid": "$NVMF_PORT", 00:20:51.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.374 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:20:51.374 "hdgst": ${hdgst:-false}, 00:20:51.374 "ddgst": ${ddgst:-false} 00:20:51.374 }, 00:20:51.374 "method": "bdev_nvme_attach_controller" 00:20:51.374 } 00:20:51.374 EOF 00:20:51.374 )") 00:20:51.374 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.633 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.633 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.633 { 00:20:51.633 "params": { 00:20:51.633 "name": "Nvme$subsystem", 00:20:51.633 "trtype": "$TEST_TRANSPORT", 00:20:51.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.633 "adrfam": "ipv4", 00:20:51.633 "trsvcid": "$NVMF_PORT", 00:20:51.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.633 "hdgst": ${hdgst:-false}, 00:20:51.633 "ddgst": ${ddgst:-false} 00:20:51.633 }, 00:20:51.633 "method": "bdev_nvme_attach_controller" 00:20:51.633 } 00:20:51.633 EOF 00:20:51.633 )") 00:20:51.633 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:51.633 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:51.633 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:51.633 12:44:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:51.633 "params": { 00:20:51.633 "name": "Nvme1", 00:20:51.633 "trtype": "tcp", 00:20:51.633 "traddr": "10.0.0.2", 00:20:51.633 "adrfam": "ipv4", 00:20:51.633 "trsvcid": "4420", 00:20:51.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:51.633 "hdgst": false, 00:20:51.633 "ddgst": false 00:20:51.633 }, 00:20:51.633 "method": "bdev_nvme_attach_controller" 00:20:51.633 },{ 00:20:51.633 "params": { 00:20:51.633 "name": "Nvme2", 00:20:51.633 "trtype": "tcp", 00:20:51.633 "traddr": "10.0.0.2", 00:20:51.633 "adrfam": "ipv4", 00:20:51.633 "trsvcid": "4420", 00:20:51.633 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:51.633 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:51.633 "hdgst": false, 00:20:51.633 "ddgst": false 00:20:51.633 }, 00:20:51.634 "method": "bdev_nvme_attach_controller" 00:20:51.634 },{ 00:20:51.634 "params": { 00:20:51.634 "name": "Nvme3", 00:20:51.634 "trtype": "tcp", 00:20:51.634 "traddr": "10.0.0.2", 00:20:51.634 "adrfam": "ipv4", 00:20:51.634 "trsvcid": "4420", 00:20:51.634 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:51.634 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:51.634 "hdgst": false, 00:20:51.634 "ddgst": false 00:20:51.634 }, 00:20:51.634 "method": "bdev_nvme_attach_controller" 00:20:51.634 },{ 00:20:51.634 "params": { 00:20:51.634 "name": "Nvme4", 00:20:51.634 "trtype": "tcp", 00:20:51.634 "traddr": "10.0.0.2", 00:20:51.634 "adrfam": "ipv4", 00:20:51.634 "trsvcid": "4420", 00:20:51.634 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:51.634 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:51.634 "hdgst": false, 00:20:51.634 "ddgst": false 00:20:51.634 }, 00:20:51.634 "method": "bdev_nvme_attach_controller" 00:20:51.634 },{ 00:20:51.634 "params": { 
00:20:51.634 "name": "Nvme5", 00:20:51.634 "trtype": "tcp", 00:20:51.634 "traddr": "10.0.0.2", 00:20:51.634 "adrfam": "ipv4", 00:20:51.634 "trsvcid": "4420", 00:20:51.634 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:51.634 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:51.634 "hdgst": false, 00:20:51.634 "ddgst": false 00:20:51.634 }, 00:20:51.634 "method": "bdev_nvme_attach_controller" 00:20:51.634 },{ 00:20:51.634 "params": { 00:20:51.634 "name": "Nvme6", 00:20:51.634 "trtype": "tcp", 00:20:51.634 "traddr": "10.0.0.2", 00:20:51.634 "adrfam": "ipv4", 00:20:51.634 "trsvcid": "4420", 00:20:51.634 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:51.634 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:51.634 "hdgst": false, 00:20:51.634 "ddgst": false 00:20:51.634 }, 00:20:51.634 "method": "bdev_nvme_attach_controller" 00:20:51.634 },{ 00:20:51.634 "params": { 00:20:51.634 "name": "Nvme7", 00:20:51.634 "trtype": "tcp", 00:20:51.634 "traddr": "10.0.0.2", 00:20:51.634 "adrfam": "ipv4", 00:20:51.634 "trsvcid": "4420", 00:20:51.634 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:51.634 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:51.634 "hdgst": false, 00:20:51.634 "ddgst": false 00:20:51.634 }, 00:20:51.634 "method": "bdev_nvme_attach_controller" 00:20:51.634 },{ 00:20:51.634 "params": { 00:20:51.634 "name": "Nvme8", 00:20:51.634 "trtype": "tcp", 00:20:51.634 "traddr": "10.0.0.2", 00:20:51.634 "adrfam": "ipv4", 00:20:51.634 "trsvcid": "4420", 00:20:51.634 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:51.634 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:51.634 "hdgst": false, 00:20:51.634 "ddgst": false 00:20:51.634 }, 00:20:51.634 "method": "bdev_nvme_attach_controller" 00:20:51.634 },{ 00:20:51.634 "params": { 00:20:51.634 "name": "Nvme9", 00:20:51.634 "trtype": "tcp", 00:20:51.634 "traddr": "10.0.0.2", 00:20:51.634 "adrfam": "ipv4", 00:20:51.634 "trsvcid": "4420", 00:20:51.634 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:51.634 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:51.634 "hdgst": false, 00:20:51.634 "ddgst": false 00:20:51.634 }, 00:20:51.634 "method": "bdev_nvme_attach_controller" 00:20:51.634 },{ 00:20:51.634 "params": { 00:20:51.634 "name": "Nvme10", 00:20:51.634 "trtype": "tcp", 00:20:51.634 "traddr": "10.0.0.2", 00:20:51.634 "adrfam": "ipv4", 00:20:51.634 "trsvcid": "4420", 00:20:51.634 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:51.634 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:51.634 "hdgst": false, 00:20:51.634 "ddgst": false 00:20:51.634 }, 00:20:51.634 "method": "bdev_nvme_attach_controller" 00:20:51.634 }' 00:20:51.634 [2024-11-28 12:44:33.940124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.634 [2024-11-28 12:44:33.982032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.011 Running I/O for 1 seconds... 00:20:54.207 2186.00 IOPS, 136.62 MiB/s 00:20:54.207 Latency(us) 00:20:54.207 [2024-11-28T11:44:36.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.207 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.207 Verification LBA range: start 0x0 length 0x400 00:20:54.207 Nvme1n1 : 1.15 278.59 17.41 0.00 0.00 227038.25 15386.71 225215.89 00:20:54.207 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.207 Verification LBA range: start 0x0 length 0x400 00:20:54.207 Nvme2n1 : 1.09 234.86 14.68 0.00 0.00 266020.51 18692.01 230686.72 00:20:54.207 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.207 Verification LBA range: start 0x0 length 0x400 00:20:54.207 Nvme3n1 : 1.12 290.33 18.15 0.00 0.00 211125.03 6895.53 217921.45 00:20:54.207 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.207 Verification LBA range: start 0x0 length 0x400 00:20:54.207 Nvme4n1 : 1.12 291.00 18.19 0.00 0.00 204457.77 14132.98 216097.84 00:20:54.207 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:20:54.207 Verification LBA range: start 0x0 length 0x400 00:20:54.207 Nvme5n1 : 1.09 235.89 14.74 0.00 0.00 252839.40 18350.08 227951.30 00:20:54.207 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.207 Verification LBA range: start 0x0 length 0x400 00:20:54.207 Nvme6n1 : 1.16 275.71 17.23 0.00 0.00 214160.78 15956.59 237069.36 00:20:54.207 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.207 Verification LBA range: start 0x0 length 0x400 00:20:54.207 Nvme7n1 : 1.15 282.26 17.64 0.00 0.00 205259.15 5214.39 220656.86 00:20:54.207 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.207 Verification LBA range: start 0x0 length 0x400 00:20:54.207 Nvme8n1 : 1.16 276.93 17.31 0.00 0.00 206773.92 15728.64 237069.36 00:20:54.207 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.207 Verification LBA range: start 0x0 length 0x400 00:20:54.207 Nvme9n1 : 1.16 274.79 17.17 0.00 0.00 205448.06 15158.76 233422.14 00:20:54.207 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.207 Verification LBA range: start 0x0 length 0x400 00:20:54.207 Nvme10n1 : 1.17 273.90 17.12 0.00 0.00 203018.33 12822.26 255305.46 00:20:54.207 [2024-11-28T11:44:36.726Z] =================================================================================================================== 00:20:54.207 [2024-11-28T11:44:36.726Z] Total : 2714.27 169.64 0.00 0.00 217901.64 5214.39 255305.46 00:20:54.207 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:54.207 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:54.207 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:20:54.207 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:54.207 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:54.207 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:54.207 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:54.207 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:54.207 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:54.207 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:54.207 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:54.207 rmmod nvme_tcp 00:20:54.207 rmmod nvme_fabrics 00:20:54.207 rmmod nvme_keyring 00:20:54.467 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:54.467 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:54.467 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:54.467 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2579339 ']' 00:20:54.467 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2579339 00:20:54.467 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2579339 ']' 00:20:54.467 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 2579339 00:20:54.467 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:54.467 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.467 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2579339 00:20:54.467 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:54.467 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:54.467 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2579339' 00:20:54.467 killing process with pid 2579339 00:20:54.467 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2579339 00:20:54.467 12:44:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2579339 00:20:54.726 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:54.726 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:54.726 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:54.726 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:54.726 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:54.726 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:54.726 12:44:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:54.726 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:54.726 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:54.726 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.726 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:54.726 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:57.262 00:20:57.262 real 0m14.632s 00:20:57.262 user 0m33.413s 00:20:57.262 sys 0m5.417s 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:57.262 ************************************ 00:20:57.262 END TEST nvmf_shutdown_tc1 00:20:57.262 ************************************ 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:57.262 ************************************ 
00:20:57.262 START TEST nvmf_shutdown_tc2 00:20:57.262 ************************************ 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:57.262 12:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:57.262 12:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:57.262 12:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:57.262 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:57.263 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:57.263 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:57.263 12:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.263 12:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:57.263 Found net devices under 0000:86:00.0: cvl_0_0 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:57.263 Found net devices under 0000:86:00.1: cvl_0_1 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:57.263 12:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:57.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:57.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:20:57.263 00:20:57.263 --- 10.0.0.2 ping statistics --- 00:20:57.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.263 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:57.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:57.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:20:57.263 00:20:57.263 --- 10.0.0.1 ping statistics --- 00:20:57.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.263 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:57.263 12:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2581128 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2581128 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2581128 ']' 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.263 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:57.264 [2024-11-28 12:44:39.633388] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:20:57.264 [2024-11-28 12:44:39.633433] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.264 [2024-11-28 12:44:39.699012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:57.264 [2024-11-28 12:44:39.741647] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.264 [2024-11-28 12:44:39.741685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.264 [2024-11-28 12:44:39.741692] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.264 [2024-11-28 12:44:39.741699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.264 [2024-11-28 12:44:39.741704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:57.264 [2024-11-28 12:44:39.743350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.264 [2024-11-28 12:44:39.743441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:57.264 [2024-11-28 12:44:39.743550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.264 [2024-11-28 12:44:39.743550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.523 [2024-11-28 12:44:39.882162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.523 12:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.523 12:44:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.523 Malloc1 00:20:57.523 [2024-11-28 12:44:39.995817] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.523 Malloc2 00:20:57.781 Malloc3 00:20:57.781 Malloc4 00:20:57.781 Malloc5 00:20:57.781 Malloc6 00:20:57.781 Malloc7 00:20:57.781 Malloc8 00:20:58.040 Malloc9 
00:20:58.040 Malloc10 00:20:58.040 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.040 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:58.040 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:58.040 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:58.040 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2581395 00:20:58.040 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2581395 /var/tmp/bdevperf.sock 00:20:58.040 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2581395 ']' 00:20:58.040 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:58.040 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:58.040 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.040 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:58.040 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:58.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:58.040 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:58.040 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.040 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:58.040 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:58.040 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.040 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.040 { 00:20:58.040 "params": { 00:20:58.040 "name": "Nvme$subsystem", 00:20:58.040 "trtype": "$TEST_TRANSPORT", 00:20:58.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.040 "adrfam": "ipv4", 00:20:58.040 "trsvcid": "$NVMF_PORT", 00:20:58.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.041 "hdgst": ${hdgst:-false}, 00:20:58.041 "ddgst": ${ddgst:-false} 00:20:58.041 }, 00:20:58.041 "method": "bdev_nvme_attach_controller" 00:20:58.041 } 00:20:58.041 EOF 00:20:58.041 )") 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.041 { 00:20:58.041 "params": { 00:20:58.041 "name": "Nvme$subsystem", 00:20:58.041 "trtype": "$TEST_TRANSPORT", 00:20:58.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.041 
"adrfam": "ipv4", 00:20:58.041 "trsvcid": "$NVMF_PORT", 00:20:58.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.041 "hdgst": ${hdgst:-false}, 00:20:58.041 "ddgst": ${ddgst:-false} 00:20:58.041 }, 00:20:58.041 "method": "bdev_nvme_attach_controller" 00:20:58.041 } 00:20:58.041 EOF 00:20:58.041 )") 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.041 { 00:20:58.041 "params": { 00:20:58.041 "name": "Nvme$subsystem", 00:20:58.041 "trtype": "$TEST_TRANSPORT", 00:20:58.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.041 "adrfam": "ipv4", 00:20:58.041 "trsvcid": "$NVMF_PORT", 00:20:58.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.041 "hdgst": ${hdgst:-false}, 00:20:58.041 "ddgst": ${ddgst:-false} 00:20:58.041 }, 00:20:58.041 "method": "bdev_nvme_attach_controller" 00:20:58.041 } 00:20:58.041 EOF 00:20:58.041 )") 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.041 { 00:20:58.041 "params": { 00:20:58.041 "name": "Nvme$subsystem", 00:20:58.041 "trtype": "$TEST_TRANSPORT", 00:20:58.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.041 "adrfam": "ipv4", 00:20:58.041 "trsvcid": "$NVMF_PORT", 00:20:58.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:58.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.041 "hdgst": ${hdgst:-false}, 00:20:58.041 "ddgst": ${ddgst:-false} 00:20:58.041 }, 00:20:58.041 "method": "bdev_nvme_attach_controller" 00:20:58.041 } 00:20:58.041 EOF 00:20:58.041 )") 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.041 { 00:20:58.041 "params": { 00:20:58.041 "name": "Nvme$subsystem", 00:20:58.041 "trtype": "$TEST_TRANSPORT", 00:20:58.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.041 "adrfam": "ipv4", 00:20:58.041 "trsvcid": "$NVMF_PORT", 00:20:58.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.041 "hdgst": ${hdgst:-false}, 00:20:58.041 "ddgst": ${ddgst:-false} 00:20:58.041 }, 00:20:58.041 "method": "bdev_nvme_attach_controller" 00:20:58.041 } 00:20:58.041 EOF 00:20:58.041 )") 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.041 { 00:20:58.041 "params": { 00:20:58.041 "name": "Nvme$subsystem", 00:20:58.041 "trtype": "$TEST_TRANSPORT", 00:20:58.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.041 "adrfam": "ipv4", 00:20:58.041 "trsvcid": "$NVMF_PORT", 00:20:58.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.041 "hdgst": ${hdgst:-false}, 00:20:58.041 "ddgst": 
${ddgst:-false} 00:20:58.041 }, 00:20:58.041 "method": "bdev_nvme_attach_controller" 00:20:58.041 } 00:20:58.041 EOF 00:20:58.041 )") 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.041 { 00:20:58.041 "params": { 00:20:58.041 "name": "Nvme$subsystem", 00:20:58.041 "trtype": "$TEST_TRANSPORT", 00:20:58.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.041 "adrfam": "ipv4", 00:20:58.041 "trsvcid": "$NVMF_PORT", 00:20:58.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.041 "hdgst": ${hdgst:-false}, 00:20:58.041 "ddgst": ${ddgst:-false} 00:20:58.041 }, 00:20:58.041 "method": "bdev_nvme_attach_controller" 00:20:58.041 } 00:20:58.041 EOF 00:20:58.041 )") 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:58.041 [2024-11-28 12:44:40.470686] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:20:58.041 [2024-11-28 12:44:40.470735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2581395 ] 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.041 { 00:20:58.041 "params": { 00:20:58.041 "name": "Nvme$subsystem", 00:20:58.041 "trtype": "$TEST_TRANSPORT", 00:20:58.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.041 "adrfam": "ipv4", 00:20:58.041 "trsvcid": "$NVMF_PORT", 00:20:58.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.041 "hdgst": ${hdgst:-false}, 00:20:58.041 "ddgst": ${ddgst:-false} 00:20:58.041 }, 00:20:58.041 "method": "bdev_nvme_attach_controller" 00:20:58.041 } 00:20:58.041 EOF 00:20:58.041 )") 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.041 { 00:20:58.041 "params": { 00:20:58.041 "name": "Nvme$subsystem", 00:20:58.041 "trtype": "$TEST_TRANSPORT", 00:20:58.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.041 "adrfam": "ipv4", 00:20:58.041 "trsvcid": "$NVMF_PORT", 00:20:58.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.041 "hdgst": ${hdgst:-false}, 00:20:58.041 "ddgst": ${ddgst:-false} 00:20:58.041 }, 00:20:58.041 "method": 
"bdev_nvme_attach_controller" 00:20:58.041 } 00:20:58.041 EOF 00:20:58.041 )") 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.041 { 00:20:58.041 "params": { 00:20:58.041 "name": "Nvme$subsystem", 00:20:58.041 "trtype": "$TEST_TRANSPORT", 00:20:58.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.041 "adrfam": "ipv4", 00:20:58.041 "trsvcid": "$NVMF_PORT", 00:20:58.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.041 "hdgst": ${hdgst:-false}, 00:20:58.041 "ddgst": ${ddgst:-false} 00:20:58.041 }, 00:20:58.041 "method": "bdev_nvme_attach_controller" 00:20:58.041 } 00:20:58.041 EOF 00:20:58.041 )") 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:58.041 12:44:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:58.041 "params": { 00:20:58.041 "name": "Nvme1", 00:20:58.041 "trtype": "tcp", 00:20:58.041 "traddr": "10.0.0.2", 00:20:58.041 "adrfam": "ipv4", 00:20:58.041 "trsvcid": "4420", 00:20:58.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.041 "hdgst": false, 00:20:58.041 "ddgst": false 00:20:58.041 }, 00:20:58.041 "method": "bdev_nvme_attach_controller" 00:20:58.041 },{ 00:20:58.041 "params": { 00:20:58.041 "name": "Nvme2", 00:20:58.041 "trtype": "tcp", 00:20:58.041 "traddr": "10.0.0.2", 00:20:58.041 "adrfam": "ipv4", 00:20:58.041 "trsvcid": "4420", 00:20:58.042 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:58.042 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:58.042 "hdgst": false, 00:20:58.042 "ddgst": false 00:20:58.042 }, 00:20:58.042 "method": "bdev_nvme_attach_controller" 00:20:58.042 },{ 00:20:58.042 "params": { 00:20:58.042 "name": "Nvme3", 00:20:58.042 "trtype": "tcp", 00:20:58.042 "traddr": "10.0.0.2", 00:20:58.042 "adrfam": "ipv4", 00:20:58.042 "trsvcid": "4420", 00:20:58.042 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:58.042 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:58.042 "hdgst": false, 00:20:58.042 "ddgst": false 00:20:58.042 }, 00:20:58.042 "method": "bdev_nvme_attach_controller" 00:20:58.042 },{ 00:20:58.042 "params": { 00:20:58.042 "name": "Nvme4", 00:20:58.042 "trtype": "tcp", 00:20:58.042 "traddr": "10.0.0.2", 00:20:58.042 "adrfam": "ipv4", 00:20:58.042 "trsvcid": "4420", 00:20:58.042 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:58.042 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:58.042 "hdgst": false, 00:20:58.042 "ddgst": false 00:20:58.042 }, 00:20:58.042 "method": "bdev_nvme_attach_controller" 00:20:58.042 },{ 00:20:58.042 "params": { 
00:20:58.042 "name": "Nvme5", 00:20:58.042 "trtype": "tcp", 00:20:58.042 "traddr": "10.0.0.2", 00:20:58.042 "adrfam": "ipv4", 00:20:58.042 "trsvcid": "4420", 00:20:58.042 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:58.042 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:58.042 "hdgst": false, 00:20:58.042 "ddgst": false 00:20:58.042 }, 00:20:58.042 "method": "bdev_nvme_attach_controller" 00:20:58.042 },{ 00:20:58.042 "params": { 00:20:58.042 "name": "Nvme6", 00:20:58.042 "trtype": "tcp", 00:20:58.042 "traddr": "10.0.0.2", 00:20:58.042 "adrfam": "ipv4", 00:20:58.042 "trsvcid": "4420", 00:20:58.042 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:58.042 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:58.042 "hdgst": false, 00:20:58.042 "ddgst": false 00:20:58.042 }, 00:20:58.042 "method": "bdev_nvme_attach_controller" 00:20:58.042 },{ 00:20:58.042 "params": { 00:20:58.042 "name": "Nvme7", 00:20:58.042 "trtype": "tcp", 00:20:58.042 "traddr": "10.0.0.2", 00:20:58.042 "adrfam": "ipv4", 00:20:58.042 "trsvcid": "4420", 00:20:58.042 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:58.042 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:58.042 "hdgst": false, 00:20:58.042 "ddgst": false 00:20:58.042 }, 00:20:58.042 "method": "bdev_nvme_attach_controller" 00:20:58.042 },{ 00:20:58.042 "params": { 00:20:58.042 "name": "Nvme8", 00:20:58.042 "trtype": "tcp", 00:20:58.042 "traddr": "10.0.0.2", 00:20:58.042 "adrfam": "ipv4", 00:20:58.042 "trsvcid": "4420", 00:20:58.042 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:58.042 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:58.042 "hdgst": false, 00:20:58.042 "ddgst": false 00:20:58.042 }, 00:20:58.042 "method": "bdev_nvme_attach_controller" 00:20:58.042 },{ 00:20:58.042 "params": { 00:20:58.042 "name": "Nvme9", 00:20:58.042 "trtype": "tcp", 00:20:58.042 "traddr": "10.0.0.2", 00:20:58.042 "adrfam": "ipv4", 00:20:58.042 "trsvcid": "4420", 00:20:58.042 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:58.042 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:58.042 "hdgst": false, 00:20:58.042 "ddgst": false 00:20:58.042 }, 00:20:58.042 "method": "bdev_nvme_attach_controller" 00:20:58.042 },{ 00:20:58.042 "params": { 00:20:58.042 "name": "Nvme10", 00:20:58.042 "trtype": "tcp", 00:20:58.042 "traddr": "10.0.0.2", 00:20:58.042 "adrfam": "ipv4", 00:20:58.042 "trsvcid": "4420", 00:20:58.042 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:58.042 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:58.042 "hdgst": false, 00:20:58.042 "ddgst": false 00:20:58.042 }, 00:20:58.042 "method": "bdev_nvme_attach_controller" 00:20:58.042 }' 00:20:58.042 [2024-11-28 12:44:40.534288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.300 [2024-11-28 12:44:40.577064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.674 Running I/O for 10 seconds... 00:20:59.932 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.932 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:59.932 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:59.932 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.932 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:59.932 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.932 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:59.932 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:59.932 12:44:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:59.932 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:59.932 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:59.932 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:59.932 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:59.932 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:59.932 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.932 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:59.932 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:00.191 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.191 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:00.191 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:00.191 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:00.450 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:00.450 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:00.450 12:44:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:00.450 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:00.450 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.450 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:00.450 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.450 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:00.450 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:00.450 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:00.450 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:00.450 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:00.450 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2581395 00:21:00.450 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2581395 ']' 00:21:00.450 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2581395 00:21:00.450 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:00.450 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:00.450 12:44:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2581395 00:21:00.450 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:00.450 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:00.450 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2581395' 00:21:00.450 killing process with pid 2581395 00:21:00.450 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2581395 00:21:00.450 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2581395 00:21:00.450 Received shutdown signal, test time was about 0.926485 seconds 00:21:00.450 00:21:00.450 Latency(us) 00:21:00.450 [2024-11-28T11:44:42.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.450 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:00.450 Verification LBA range: start 0x0 length 0x400 00:21:00.450 Nvme1n1 : 0.92 276.92 17.31 0.00 0.00 228654.53 16982.37 249834.63 00:21:00.450 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:00.450 Verification LBA range: start 0x0 length 0x400 00:21:00.450 Nvme2n1 : 0.89 216.10 13.51 0.00 0.00 287525.25 20971.52 251658.24 00:21:00.450 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:00.450 Verification LBA range: start 0x0 length 0x400 00:21:00.450 Nvme3n1 : 0.93 276.52 17.28 0.00 0.00 220289.34 16640.45 255305.46 00:21:00.450 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:00.450 Verification LBA range: start 0x0 length 0x400 00:21:00.450 Nvme4n1 : 0.92 279.34 17.46 0.00 0.00 214714.99 
15956.59 255305.46 00:21:00.450 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:00.450 Verification LBA range: start 0x0 length 0x400 00:21:00.450 Nvme5n1 : 0.90 213.37 13.34 0.00 0.00 275070.74 21541.40 257129.07 00:21:00.450 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:00.450 Verification LBA range: start 0x0 length 0x400 00:21:00.450 Nvme6n1 : 0.92 278.58 17.41 0.00 0.00 207120.92 18236.10 229774.91 00:21:00.450 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:00.450 Verification LBA range: start 0x0 length 0x400 00:21:00.450 Nvme7n1 : 0.91 280.57 17.54 0.00 0.00 201753.60 24618.74 242540.19 00:21:00.450 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:00.450 Verification LBA range: start 0x0 length 0x400 00:21:00.450 Nvme8n1 : 0.89 214.73 13.42 0.00 0.00 257779.39 15614.66 242540.19 00:21:00.450 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:00.450 Verification LBA range: start 0x0 length 0x400 00:21:00.450 Nvme9n1 : 0.91 215.52 13.47 0.00 0.00 251193.61 5470.83 251658.24 00:21:00.450 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:00.450 Verification LBA range: start 0x0 length 0x400 00:21:00.450 Nvme10n1 : 0.91 211.55 13.22 0.00 0.00 251952.38 20287.67 269894.34 00:21:00.450 [2024-11-28T11:44:42.969Z] =================================================================================================================== 00:21:00.450 [2024-11-28T11:44:42.969Z] Total : 2463.19 153.95 0.00 0.00 236046.98 5470.83 269894.34 00:21:00.708 12:44:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:01.822 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2581128 00:21:01.822 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # 
stoptarget 00:21:01.822 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:01.822 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:01.822 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:01.822 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:01.822 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:01.822 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:01.822 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:01.822 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:01.822 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:01.823 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:01.823 rmmod nvme_tcp 00:21:01.823 rmmod nvme_fabrics 00:21:01.823 rmmod nvme_keyring 00:21:01.823 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:01.823 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:01.823 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:01.823 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2581128 ']' 
00:21:01.823 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2581128 00:21:01.823 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2581128 ']' 00:21:01.823 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2581128 00:21:01.823 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:01.823 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.823 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2581128 00:21:01.823 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:01.823 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:01.823 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2581128' 00:21:01.823 killing process with pid 2581128 00:21:01.823 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2581128 00:21:01.823 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2581128 00:21:02.163 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:02.163 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:02.163 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:02.163 12:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:02.163 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:02.163 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:02.163 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:02.163 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:02.163 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:02.163 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.163 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.163 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:04.697 00:21:04.697 real 0m7.326s 00:21:04.697 user 0m21.678s 00:21:04.697 sys 0m1.314s 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:04.697 ************************************ 00:21:04.697 END TEST nvmf_shutdown_tc2 00:21:04.697 ************************************ 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:04.697 12:44:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:04.697 ************************************ 00:21:04.697 START TEST nvmf_shutdown_tc3 00:21:04.697 ************************************ 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.697 12:44:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:04.697 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 
00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:04.698 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.698 12:44:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:04.698 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:04.698 Found net devices under 0000:86:00.0: cvl_0_0 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:04.698 Found net devices under 0000:86:00.1: cvl_0_1 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.698 
12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:04.698 12:44:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:21:04.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:04.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:21:04.698 00:21:04.698 --- 10.0.0.2 ping statistics --- 00:21:04.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.698 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:04.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:04.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:21:04.698 00:21:04.698 --- 10.0.0.1 ping statistics --- 00:21:04.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.698 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:21:04.698 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:04.699 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:04.699 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:04.699 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:04.699 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:04.699 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:04.699 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:04.699 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:04.699 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # 
modprobe nvme-tcp 00:21:04.699 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:04.699 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:04.699 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:04.699 12:44:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:04.699 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2582459 00:21:04.699 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2582459 00:21:04.699 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:04.699 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2582459 ']' 00:21:04.699 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.699 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.699 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:04.699 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.699 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:04.699 [2024-11-28 12:44:47.059641] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:21:04.699 [2024-11-28 12:44:47.059689] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.699 [2024-11-28 12:44:47.125486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:04.699 [2024-11-28 12:44:47.168472] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.699 [2024-11-28 12:44:47.168509] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.699 [2024-11-28 12:44:47.168516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.699 [2024-11-28 12:44:47.168522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.699 [2024-11-28 12:44:47.168527] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:04.699 [2024-11-28 12:44:47.170169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:04.699 [2024-11-28 12:44:47.170255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:04.699 [2024-11-28 12:44:47.170286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.699 [2024-11-28 12:44:47.170287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:04.958 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:04.958 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:04.958 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:04.958 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:04.958 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:04.958 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.958 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:04.958 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.958 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:04.958 [2024-11-28 12:44:47.313427] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.958 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.958 12:44:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:04.958 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:04.958 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:04.958 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:04.958 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:04.958 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.958 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.958 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.958 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.958 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.958 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.958 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.959 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.959 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.959 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:21:04.959 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.959 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.959 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.959 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.959 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.959 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.959 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.959 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.959 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.959 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.959 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:04.959 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.959 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:04.959 Malloc1 00:21:04.959 [2024-11-28 12:44:47.423357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.959 Malloc2 00:21:05.217 Malloc3 00:21:05.217 Malloc4 00:21:05.217 Malloc5 00:21:05.217 Malloc6 00:21:05.217 Malloc7 00:21:05.217 Malloc8 00:21:05.477 Malloc9 
00:21:05.477 Malloc10 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2582727 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2582727 /var/tmp/bdevperf.sock 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2582727 ']' 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:05.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.477 { 00:21:05.477 "params": { 00:21:05.477 "name": "Nvme$subsystem", 00:21:05.477 "trtype": "$TEST_TRANSPORT", 00:21:05.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.477 "adrfam": "ipv4", 00:21:05.477 "trsvcid": "$NVMF_PORT", 00:21:05.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.477 "hdgst": ${hdgst:-false}, 00:21:05.477 "ddgst": ${ddgst:-false} 00:21:05.477 }, 00:21:05.477 "method": "bdev_nvme_attach_controller" 00:21:05.477 } 00:21:05.477 EOF 00:21:05.477 )") 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.477 { 00:21:05.477 "params": { 00:21:05.477 "name": "Nvme$subsystem", 00:21:05.477 "trtype": "$TEST_TRANSPORT", 00:21:05.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.477 
"adrfam": "ipv4", 00:21:05.477 "trsvcid": "$NVMF_PORT", 00:21:05.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.477 "hdgst": ${hdgst:-false}, 00:21:05.477 "ddgst": ${ddgst:-false} 00:21:05.477 }, 00:21:05.477 "method": "bdev_nvme_attach_controller" 00:21:05.477 } 00:21:05.477 EOF 00:21:05.477 )") 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.477 { 00:21:05.477 "params": { 00:21:05.477 "name": "Nvme$subsystem", 00:21:05.477 "trtype": "$TEST_TRANSPORT", 00:21:05.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.477 "adrfam": "ipv4", 00:21:05.477 "trsvcid": "$NVMF_PORT", 00:21:05.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.477 "hdgst": ${hdgst:-false}, 00:21:05.477 "ddgst": ${ddgst:-false} 00:21:05.477 }, 00:21:05.477 "method": "bdev_nvme_attach_controller" 00:21:05.477 } 00:21:05.477 EOF 00:21:05.477 )") 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.477 { 00:21:05.477 "params": { 00:21:05.477 "name": "Nvme$subsystem", 00:21:05.477 "trtype": "$TEST_TRANSPORT", 00:21:05.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.477 "adrfam": "ipv4", 00:21:05.477 "trsvcid": "$NVMF_PORT", 00:21:05.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:05.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.477 "hdgst": ${hdgst:-false}, 00:21:05.477 "ddgst": ${ddgst:-false} 00:21:05.477 }, 00:21:05.477 "method": "bdev_nvme_attach_controller" 00:21:05.477 } 00:21:05.477 EOF 00:21:05.477 )") 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.477 { 00:21:05.477 "params": { 00:21:05.477 "name": "Nvme$subsystem", 00:21:05.477 "trtype": "$TEST_TRANSPORT", 00:21:05.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.477 "adrfam": "ipv4", 00:21:05.477 "trsvcid": "$NVMF_PORT", 00:21:05.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.477 "hdgst": ${hdgst:-false}, 00:21:05.477 "ddgst": ${ddgst:-false} 00:21:05.477 }, 00:21:05.477 "method": "bdev_nvme_attach_controller" 00:21:05.477 } 00:21:05.477 EOF 00:21:05.477 )") 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.477 { 00:21:05.477 "params": { 00:21:05.477 "name": "Nvme$subsystem", 00:21:05.477 "trtype": "$TEST_TRANSPORT", 00:21:05.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.477 "adrfam": "ipv4", 00:21:05.477 "trsvcid": "$NVMF_PORT", 00:21:05.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.477 "hdgst": ${hdgst:-false}, 00:21:05.477 "ddgst": 
${ddgst:-false} 00:21:05.477 }, 00:21:05.477 "method": "bdev_nvme_attach_controller" 00:21:05.477 } 00:21:05.477 EOF 00:21:05.477 )") 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.477 { 00:21:05.477 "params": { 00:21:05.477 "name": "Nvme$subsystem", 00:21:05.477 "trtype": "$TEST_TRANSPORT", 00:21:05.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.477 "adrfam": "ipv4", 00:21:05.477 "trsvcid": "$NVMF_PORT", 00:21:05.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.477 "hdgst": ${hdgst:-false}, 00:21:05.477 "ddgst": ${ddgst:-false} 00:21:05.477 }, 00:21:05.477 "method": "bdev_nvme_attach_controller" 00:21:05.477 } 00:21:05.477 EOF 00:21:05.477 )") 00:21:05.477 [2024-11-28 12:44:47.897384] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:21:05.477 [2024-11-28 12:44:47.897433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2582727 ] 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.477 { 00:21:05.477 "params": { 00:21:05.477 "name": "Nvme$subsystem", 00:21:05.477 "trtype": "$TEST_TRANSPORT", 00:21:05.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.477 "adrfam": "ipv4", 00:21:05.477 "trsvcid": "$NVMF_PORT", 00:21:05.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.477 "hdgst": ${hdgst:-false}, 00:21:05.477 "ddgst": ${ddgst:-false} 00:21:05.477 }, 00:21:05.477 "method": "bdev_nvme_attach_controller" 00:21:05.477 } 00:21:05.477 EOF 00:21:05.477 )") 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.477 { 00:21:05.477 "params": { 00:21:05.477 "name": "Nvme$subsystem", 00:21:05.477 "trtype": "$TEST_TRANSPORT", 00:21:05.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.477 "adrfam": "ipv4", 00:21:05.477 "trsvcid": "$NVMF_PORT", 00:21:05.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.477 "hdgst": 
${hdgst:-false}, 00:21:05.477 "ddgst": ${ddgst:-false} 00:21:05.477 }, 00:21:05.477 "method": "bdev_nvme_attach_controller" 00:21:05.477 } 00:21:05.477 EOF 00:21:05.477 )") 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.477 { 00:21:05.477 "params": { 00:21:05.477 "name": "Nvme$subsystem", 00:21:05.477 "trtype": "$TEST_TRANSPORT", 00:21:05.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.477 "adrfam": "ipv4", 00:21:05.477 "trsvcid": "$NVMF_PORT", 00:21:05.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.477 "hdgst": ${hdgst:-false}, 00:21:05.477 "ddgst": ${ddgst:-false} 00:21:05.477 }, 00:21:05.477 "method": "bdev_nvme_attach_controller" 00:21:05.477 } 00:21:05.477 EOF 00:21:05.477 )") 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:05.477 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:05.477 "params": { 00:21:05.477 "name": "Nvme1", 00:21:05.477 "trtype": "tcp", 00:21:05.477 "traddr": "10.0.0.2", 00:21:05.477 "adrfam": "ipv4", 00:21:05.477 "trsvcid": "4420", 00:21:05.477 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.477 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.477 "hdgst": false, 00:21:05.477 "ddgst": false 00:21:05.477 }, 00:21:05.477 "method": "bdev_nvme_attach_controller" 00:21:05.477 },{ 00:21:05.477 "params": { 00:21:05.477 "name": "Nvme2", 00:21:05.477 "trtype": "tcp", 00:21:05.477 "traddr": "10.0.0.2", 00:21:05.477 "adrfam": "ipv4", 00:21:05.477 "trsvcid": "4420", 00:21:05.477 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:05.477 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:05.477 "hdgst": false, 00:21:05.477 "ddgst": false 00:21:05.477 }, 00:21:05.477 "method": "bdev_nvme_attach_controller" 00:21:05.477 },{ 00:21:05.477 "params": { 00:21:05.477 "name": "Nvme3", 00:21:05.477 "trtype": "tcp", 00:21:05.477 "traddr": "10.0.0.2", 00:21:05.477 "adrfam": "ipv4", 00:21:05.477 "trsvcid": "4420", 00:21:05.477 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:05.477 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:05.477 "hdgst": false, 00:21:05.477 "ddgst": false 00:21:05.477 }, 00:21:05.477 "method": "bdev_nvme_attach_controller" 00:21:05.477 },{ 00:21:05.477 "params": { 00:21:05.477 "name": "Nvme4", 00:21:05.477 "trtype": "tcp", 00:21:05.477 "traddr": "10.0.0.2", 00:21:05.477 "adrfam": "ipv4", 00:21:05.477 "trsvcid": "4420", 00:21:05.477 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:05.477 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:05.478 "hdgst": false, 00:21:05.478 "ddgst": false 00:21:05.478 }, 00:21:05.478 "method": "bdev_nvme_attach_controller" 00:21:05.478 },{ 00:21:05.478 "params": { 
00:21:05.478 "name": "Nvme5", 00:21:05.478 "trtype": "tcp", 00:21:05.478 "traddr": "10.0.0.2", 00:21:05.478 "adrfam": "ipv4", 00:21:05.478 "trsvcid": "4420", 00:21:05.478 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:05.478 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:05.478 "hdgst": false, 00:21:05.478 "ddgst": false 00:21:05.478 }, 00:21:05.478 "method": "bdev_nvme_attach_controller" 00:21:05.478 },{ 00:21:05.478 "params": { 00:21:05.478 "name": "Nvme6", 00:21:05.478 "trtype": "tcp", 00:21:05.478 "traddr": "10.0.0.2", 00:21:05.478 "adrfam": "ipv4", 00:21:05.478 "trsvcid": "4420", 00:21:05.478 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:05.478 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:05.478 "hdgst": false, 00:21:05.478 "ddgst": false 00:21:05.478 }, 00:21:05.478 "method": "bdev_nvme_attach_controller" 00:21:05.478 },{ 00:21:05.478 "params": { 00:21:05.478 "name": "Nvme7", 00:21:05.478 "trtype": "tcp", 00:21:05.478 "traddr": "10.0.0.2", 00:21:05.478 "adrfam": "ipv4", 00:21:05.478 "trsvcid": "4420", 00:21:05.478 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:05.478 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:05.478 "hdgst": false, 00:21:05.478 "ddgst": false 00:21:05.478 }, 00:21:05.478 "method": "bdev_nvme_attach_controller" 00:21:05.478 },{ 00:21:05.478 "params": { 00:21:05.478 "name": "Nvme8", 00:21:05.478 "trtype": "tcp", 00:21:05.478 "traddr": "10.0.0.2", 00:21:05.478 "adrfam": "ipv4", 00:21:05.478 "trsvcid": "4420", 00:21:05.478 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:05.478 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:05.478 "hdgst": false, 00:21:05.478 "ddgst": false 00:21:05.478 }, 00:21:05.478 "method": "bdev_nvme_attach_controller" 00:21:05.478 },{ 00:21:05.478 "params": { 00:21:05.478 "name": "Nvme9", 00:21:05.478 "trtype": "tcp", 00:21:05.478 "traddr": "10.0.0.2", 00:21:05.478 "adrfam": "ipv4", 00:21:05.478 "trsvcid": "4420", 00:21:05.478 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:05.478 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:05.478 "hdgst": false, 00:21:05.478 "ddgst": false 00:21:05.478 }, 00:21:05.478 "method": "bdev_nvme_attach_controller" 00:21:05.478 },{ 00:21:05.478 "params": { 00:21:05.478 "name": "Nvme10", 00:21:05.478 "trtype": "tcp", 00:21:05.478 "traddr": "10.0.0.2", 00:21:05.478 "adrfam": "ipv4", 00:21:05.478 "trsvcid": "4420", 00:21:05.478 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:05.478 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:05.478 "hdgst": false, 00:21:05.478 "ddgst": false 00:21:05.478 }, 00:21:05.478 "method": "bdev_nvme_attach_controller" 00:21:05.478 }' 00:21:05.478 [2024-11-28 12:44:47.963198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.737 [2024-11-28 12:44:48.005256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.113 Running I/O for 10 seconds... 00:21:07.372 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:07.372 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:07.372 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:07.372 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.372 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:07.372 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.372 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:07.372 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:21:07.372 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:07.372 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:07.372 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:07.372 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:07.372 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:07.372 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:07.372 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:07.372 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:07.372 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.372 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:07.372 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.372 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:07.372 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:07.372 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:07.630 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
00:21:07.631 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:21:07.631 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:21:07.631 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:21:07.631 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:07.631 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:07.631 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:07.631 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67
00:21:07.631 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']'
00:21:07.631 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:21:07.890 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:21:07.890 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:21:07.890 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:21:07.890 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:07.890 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:21:07.890 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:08.156 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:08.156 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195
00:21:08.156 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']'
00:21:08.156 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:21:08.156 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:21:08.156 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:21:08.156 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2582459
00:21:08.156 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2582459 ']'
00:21:08.156 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2582459
00:21:08.156 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:21:08.156 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:08.156 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2582459
00:21:08.156 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:08.156 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:21:08.156 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 --
common/autotest_common.sh@972 -- # echo 'killing process with pid 2582459'
00:21:08.156 killing process with pid 2582459
00:21:08.156 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2582459
00:21:08.156 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2582459
00:21:08.156 [2024-11-28 12:44:50.484174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x881850 is same with the state(6) to be set
[... identical message for tqpair=0x881850 repeated through 12:44:50.484620 ...]
00:21:08.156 [2024-11-28 12:44:50.485675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:08.156 [2024-11-28 12:44:50.485709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:08.156 [2024-11-28 12:44:50.485723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:08.156 [2024-11-28 12:44:50.485730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:08.156 [2024-11-28 12:44:50.485738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:08.156 [2024-11-28 12:44:50.485745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:08.157 [2024-11-28 12:44:50.485753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:08.157 [2024-11-28 12:44:50.485759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:08.157 [2024-11-28 12:44:50.485766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8631c0 is same with the state(6) to be set
00:21:08.157 [2024-11-28 12:44:50.486219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9e30 is same with the state(6) to be set
[... identical message for tqpair=0xaf9e30 repeated through 12:44:50.486869 ...]
00:21:08.157 [2024-11-28 12:44:50.487981] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:08.157 [2024-11-28 12:44:50.488532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x881d20 is same with the state(6) to be set
[... identical message for tqpair=0x881d20 repeated through 12:44:50.489170 ...]
00:21:08.158 [2024-11-28 12:44:50.491076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:08.158 [2024-11-28 12:44:50.491098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:08.158 [2024-11-28 12:44:50.491118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:08.158 [2024-11-28 12:44:50.491126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:08.158 [2024-11-28 12:44:50.491134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:08.158 [2024-11-28 12:44:50.491141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08)
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.158 [2024-11-28 12:44:50.491150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.158 [2024-11-28 12:44:50.491156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.158 [2024-11-28 12:44:50.491164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.158 [2024-11-28 12:44:50.491171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.158 [2024-11-28 12:44:50.491179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.158 [2024-11-28 12:44:50.491186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.158 [2024-11-28 12:44:50.491194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.158 [2024-11-28 12:44:50.491201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.158 [2024-11-28 12:44:50.491209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.158 [2024-11-28 12:44:50.491216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.158 [2024-11-28 12:44:50.491224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:08.158 [2024-11-28 12:44:50.491231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.158 [2024-11-28 12:44:50.491239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.158 [2024-11-28 12:44:50.491246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.158 [2024-11-28 12:44:50.491254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.158 [2024-11-28 12:44:50.491261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.158 [2024-11-28 12:44:50.491269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.158 [2024-11-28 12:44:50.491275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.158 [2024-11-28 12:44:50.491284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.158 [2024-11-28 12:44:50.491290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.158 [2024-11-28 12:44:50.491299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.158 [2024-11-28 12:44:50.491306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.158 [2024-11-28 12:44:50.491314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.158 [2024-11-28 12:44:50.491321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.158 [2024-11-28 12:44:50.491329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.158 [2024-11-28 12:44:50.491336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.158 [2024-11-28 12:44:50.491344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.158 [2024-11-28 12:44:50.491351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.158 [2024-11-28 12:44:50.491359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.158 [2024-11-28 12:44:50.491365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.158 [2024-11-28 12:44:50.491373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 
[2024-11-28 12:44:50.491571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8826e0 is same with the state(6) to be set 00:21:08.159 [2024-11-28 12:44:50.491639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491650] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8826e0 is same with the state(6) to be set 00:21:08.159 [2024-11-28 12:44:50.491657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8826e0 is same with the state(6) to be set 00:21:08.159 [2024-11-28 12:44:50.491666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8826e0 is same with the state(6) to be set 00:21:08.159 [2024-11-28 12:44:50.491673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8826e0 is same with the state(6) to be set 00:21:08.159 [2024-11-28 12:44:50.491683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8826e0 is same with the state(6) to be set 00:21:08.159 [2024-11-28 12:44:50.491691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:08.159 [2024-11-28 12:44:50.491707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491794] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491878] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.159 [2024-11-28 12:44:50.491917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.159 [2024-11-28 12:44:50.491923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.160 [2024-11-28 12:44:50.491932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.160 [2024-11-28 12:44:50.491939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.160 [2024-11-28 12:44:50.491952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.160 [2024-11-28 12:44:50.491959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.160 [2024-11-28 12:44:50.491968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.160 [2024-11-28 12:44:50.491975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.160 [2024-11-28 12:44:50.491983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.160 [2024-11-28 12:44:50.491990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.160 [2024-11-28 12:44:50.491998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.160 [2024-11-28 12:44:50.492005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.160 [2024-11-28 12:44:50.492015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.160 [2024-11-28 12:44:50.492022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.160 [2024-11-28 12:44:50.492031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.160 [2024-11-28 12:44:50.492037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.160 [2024-11-28 12:44:50.492047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.160 [2024-11-28 12:44:50.492054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.160 [2024-11-28 
12:44:50.492062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.160 [2024-11-28 12:44:50.492069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.160 [2024-11-28 12:44:50.492077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.160 [2024-11-28 12:44:50.492083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.160 [2024-11-28 12:44:50.492112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:08.160 [2024-11-28 12:44:50.492285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with 
the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 
[2024-11-28 12:44:50.492416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492491] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.160 [2024-11-28 12:44:50.492565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.161 [2024-11-28 12:44:50.492571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.161 [2024-11-28 12:44:50.492577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.161 [2024-11-28 12:44:50.492584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.161 [2024-11-28 12:44:50.492590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.161 [2024-11-28 12:44:50.492596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.161 [2024-11-28 12:44:50.492602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.161 [2024-11-28 12:44:50.492608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.161 [2024-11-28 12:44:50.492614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.161 [2024-11-28 12:44:50.492620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.161 [2024-11-28 12:44:50.492626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.161 [2024-11-28 12:44:50.492632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.161 [2024-11-28 12:44:50.492639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 
is same with the state(6) to be set 00:21:08.161 [2024-11-28 12:44:50.492644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.161 [2024-11-28 12:44:50.492651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.161 [2024-11-28 12:44:50.492657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.161 [2024-11-28 12:44:50.492664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.161 [2024-11-28 12:44:50.492670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.161 [2024-11-28 12:44:50.492676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.161 [2024-11-28 12:44:50.492682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882bb0 is same with the state(6) to be set 00:21:08.161 [2024-11-28 12:44:50.492689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.492706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.492718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.492725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.492734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.492740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.492749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.492756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.492764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.492770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.492782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.492789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.492797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.492804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.492813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.492820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.492828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.492835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.492843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.492850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.492858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.492865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.492873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.492880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.492889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.492895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.492903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 
12:44:50.492910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.492918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.492925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.492933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.492940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.492953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.492961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.492969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.492981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.492990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.492996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.493005] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.493012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.493020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.493027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.493035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.493042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.493050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.493057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.493066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.493073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.493082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.493088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.493096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.493103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.493111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.493118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.493126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.493133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.493141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.161 [2024-11-28 12:44:50.493147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.161 [2024-11-28 12:44:50.493155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 
[2024-11-28 12:44:50.493179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883080 is same with the state(6) to be set 00:21:08.162 [2024-11-28 12:44:50.493492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28
12:44:50.493508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883080 is same with the state(6) to be set 00:21:08.162 [2024-11-28 12:44:50.493513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883080 is same with the state(6) to be set 00:21:08.162 [2024-11-28 12:44:50.493530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883080 is same with the state(6) to be set 00:21:08.162 [2024-11-28 12:44:50.493552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:08.162 [2024-11-28 12:44:50.493672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.162 [2024-11-28 12:44:50.493686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.162 [2024-11-28 12:44:50.493709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:08.162 [2024-11-28 12:44:50.494317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883550 is same with the state(6) to be set 00:21:08.162 [2024-11-28 12:44:50.494343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883550 is same with the state(6) to be set 00:21:08.162 [2024-11-28 12:44:50.494354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883550 is same with the state(6) to be set 00:21:08.162 [2024-11-28 12:44:50.494361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883550 is same with the state(6) to be set 00:21:08.162 [2024-11-28 12:44:50.494367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883550 is same with the state(6) to be set 00:21:08.163 [2024-11-28 12:44:50.494373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883550 is same with the state(6) to be set 00:21:08.163 [2024-11-28 12:44:50.494380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883550 is same with the state(6) to be set 00:21:08.163 [2024-11-28 12:44:50.494386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883550 is 
same with the state(6) to be set 00:21:08.163 [2024-11-28 12:44:50.494393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883550 is same with the state(6) to be set 00:21:08.163 [2024-11-28 12:44:50.494399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883550 is same with the state(6) to be set 00:21:08.163 [2024-11-28 12:44:50.494405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883550 is same with the state(6) to be set 00:21:08.163 [2024-11-28 12:44:50.494411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883550 is same with the state(6) to be set 00:21:08.163 [2024-11-28 12:44:50.494417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883550 is same with the state(6) to be set 00:21:08.163 [2024-11-28 12:44:50.494423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883550 is same with the state(6) to be set 00:21:08.163 [2024-11-28 12:44:50.494430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883550 is same with the state(6) to be set 00:21:08.163 [2024-11-28 12:44:50.494436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883550 is same with the state(6) to be set 00:21:08.163 [2024-11-28 12:44:50.494442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883550 is same with the state(6) to be set 00:21:08.163 [2024-11-28 12:44:50.494448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883550 is same with the state(6) to be set 00:21:08.163 [2024-11-28 12:44:50.494453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883550 is same with the state(6) to be set 00:21:08.163 [2024-11-28 12:44:50.494459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883550 is same with the state(6) to be set 
00:21:08.163 [2024-11-28 12:44:50.494465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883550 is same with the state(6) to be set
[... identical message for tqpair=0x883550 repeated ~40 more times (12:44:50.494471 through 12:44:50.494731) ...]
00:21:08.163 [2024-11-28 12:44:50.495507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x883a40 is same with the state(6) to be set
[... identical message for tqpair=0x883a40 repeated ~60 more times (12:44:50.495521 through 12:44:50.495891) ...]
00:21:08.164 [2024-11-28 12:44:50.496252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:08.164 [2024-11-28 12:44:50.496273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:08.164 [2024-11-28 12:44:50.496285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:08.164 [2024-11-28 12:44:50.496292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:21:08.164 [2024-11-28 12:44:50.496300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.164 [2024-11-28 12:44:50.496307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.164 [2024-11-28 12:44:50.496316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.164 [2024-11-28 12:44:50.496323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.164 [2024-11-28 12:44:50.496331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.164 [2024-11-28 12:44:50.496338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.164 [2024-11-28 12:44:50.496346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.164 [2024-11-28 12:44:50.496353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.164 [2024-11-28 12:44:50.496364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.164 [2024-11-28 12:44:50.496373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.164 [2024-11-28 12:44:50.496382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.164 [2024-11-28 
12:44:50.496389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.164 [2024-11-28 12:44:50.496397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.164 [2024-11-28 12:44:50.496404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.164 [2024-11-28 12:44:50.496413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.164 [2024-11-28 12:44:50.496420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.164 [2024-11-28 12:44:50.496428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.164 [2024-11-28 12:44:50.496435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.164 [2024-11-28 12:44:50.496443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.164 [2024-11-28 12:44:50.496450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.164 [2024-11-28 12:44:50.496458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128[2024-11-28 12:44:50.496455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with t SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.164 he state(6) to be set 00:21:08.164 [2024-11-28 12:44:50.496468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.164 [2024-11-28 12:44:50.496471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.164 [2024-11-28 12:44:50.496477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.164 [2024-11-28 12:44:50.496479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.164 [2024-11-28 12:44:50.496485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-28 12:44:50.496486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.164 he state(6) to be set 00:21:08.164 [2024-11-28 12:44:50.496495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.164 [2024-11-28 12:44:50.496496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.164 [2024-11-28 12:44:50.496501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.164 [2024-11-28 12:44:50.496504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.164 [2024-11-28 12:44:50.496508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.164 [2024-11-28 12:44:50.496513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.164 [2024-11-28 12:44:50.496518] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.164 [2024-11-28 12:44:50.496521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.164 [2024-11-28 12:44:50.496525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.164 [2024-11-28 12:44:50.496530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.164 [2024-11-28 12:44:50.496532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.164 [2024-11-28 12:44:50.496538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.164 [2024-11-28 12:44:50.496539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.165 [2024-11-28 12:44:50.496553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.165 [2024-11-28 12:44:50.496559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 
12:44:50.496564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.165 [2024-11-28 12:44:50.496566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.165 [2024-11-28 12:44:50.496574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.165 [2024-11-28 12:44:50.496587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.165 [2024-11-28 12:44:50.496594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.165 [2024-11-28 12:44:50.496601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:08.165 [2024-11-28 12:44:50.496608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:12[2024-11-28 12:44:50.496617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with t8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.165 he state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-28 12:44:50.496628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.165 he state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.165 [2024-11-28 12:44:50.496643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.165 [2024-11-28 12:44:50.496650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.165 [2024-11-28 12:44:50.496664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.165 [2024-11-28 12:44:50.496671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.165 [2024-11-28 12:44:50.496678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.165 [2024-11-28 12:44:50.496685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.165 [2024-11-28 12:44:50.496699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.165 [2024-11-28 12:44:50.496707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.165 [2024-11-28 12:44:50.496714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.165 [2024-11-28 12:44:50.496721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.165 [2024-11-28 12:44:50.496731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.165 [2024-11-28 12:44:50.496738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.165 [2024-11-28 12:44:50.496752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496755] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.165 [2024-11-28 12:44:50.496760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.165 [2024-11-28 12:44:50.496767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.165 [2024-11-28 12:44:50.496774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.165 [2024-11-28 12:44:50.496787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.165 [2024-11-28 12:44:50.496794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165 [2024-11-28 12:44:50.496799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.165 
[2024-11-28 12:44:50.496801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165
[2024-11-28 12:44:50.496807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.165
[2024-11-28 12:44:50.496808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165
[2024-11-28 12:44:50.496816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165
[2024-11-28 12:44:50.496816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.165
[2024-11-28 12:44:50.496822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.165
[2024-11-28 12:44:50.496825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166
[2024-11-28 12:44:50.496829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.166
[2024-11-28 12:44:50.496835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166
[2024-11-28 12:44:50.496836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.166
[2024-11-28 12:44:50.496845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166
[2024-11-28 12:44:50.496846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.166
[2024-11-28 12:44:50.496855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.166
[2024-11-28 12:44:50.496856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166
[2024-11-28 12:44:50.496861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.166
[2024-11-28 12:44:50.496865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166
[2024-11-28 12:44:50.496868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.166
[2024-11-28 12:44:50.496874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166
[2024-11-28 12:44:50.496875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.166
[2024-11-28 12:44:50.496883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166
[2024-11-28 12:44:50.496892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.166
[2024-11-28 12:44:50.496892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166
[2024-11-28 12:44:50.496900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.166
[2024-11-28 12:44:50.496902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166
[2024-11-28 12:44:50.496907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.166
[2024-11-28 12:44:50.496912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166
[2024-11-28 12:44:50.496914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.166
[2024-11-28 12:44:50.496921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.166
[2024-11-28 12:44:50.496921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166
[2024-11-28 12:44:50.496929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf9960 is same with the state(6) to be set 00:21:08.166
[2024-11-28 12:44:50.496932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166
[2024-11-28 12:44:50.496941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166
[2024-11-28 12:44:50.496954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166
[2024-11-28 12:44:50.496963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166
[2024-11-28 12:44:50.496971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166
[2024-11-28 12:44:50.496978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166 [2024-11-28 12:44:50.496986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166 [2024-11-28 12:44:50.496992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166 [2024-11-28 12:44:50.497000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166 [2024-11-28 12:44:50.497007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166 [2024-11-28 12:44:50.497016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166 [2024-11-28 12:44:50.497022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166 [2024-11-28 12:44:50.497030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166 [2024-11-28 12:44:50.497037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166 [2024-11-28 12:44:50.497045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166 [2024-11-28 12:44:50.497052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166 [2024-11-28 12:44:50.497060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 
nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166 [2024-11-28 12:44:50.497066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166 [2024-11-28 12:44:50.497074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166 [2024-11-28 12:44:50.497081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166 [2024-11-28 12:44:50.497089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166 [2024-11-28 12:44:50.497095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166 [2024-11-28 12:44:50.497103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166 [2024-11-28 12:44:50.497110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166 [2024-11-28 12:44:50.497118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166 [2024-11-28 12:44:50.497125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166 [2024-11-28 12:44:50.497132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166 [2024-11-28 12:44:50.497139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:08.166 [2024-11-28 12:44:50.497148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166 [2024-11-28 12:44:50.497155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166 [2024-11-28 12:44:50.497163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166 [2024-11-28 12:44:50.497171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166 [2024-11-28 12:44:50.497179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166 [2024-11-28 12:44:50.497187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166 [2024-11-28 12:44:50.497195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166 [2024-11-28 12:44:50.497201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166 [2024-11-28 12:44:50.497210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166 [2024-11-28 12:44:50.497217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166 [2024-11-28 12:44:50.497225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166 [2024-11-28 12:44:50.497231] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166 [2024-11-28 12:44:50.497239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166 [2024-11-28 12:44:50.497246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166 [2024-11-28 12:44:50.497254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166 [2024-11-28 12:44:50.497261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166 [2024-11-28 12:44:50.497268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166 [2024-11-28 12:44:50.497275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166 [2024-11-28 12:44:50.497283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166 [2024-11-28 12:44:50.497289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166 [2024-11-28 12:44:50.497297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166 [2024-11-28 12:44:50.497304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166 [2024-11-28 12:44:50.497312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.166 [2024-11-28 12:44:50.497318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.166 [2024-11-28 12:44:50.497570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:08.166 [2024-11-28 12:44:50.497618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:08.166 [2024-11-28 12:44:50.497655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8ecf0 (9): Bad file descriptor 00:21:08.166 [2024-11-28 12:44:50.497667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x862d30 (9): Bad file descriptor 00:21:08.166 [2024-11-28 12:44:50.497690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.497698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.497706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.497713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.497720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.497726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.497734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.497740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.497747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x857200 is same with the state(6) to be set 00:21:08.167 [2024-11-28 12:44:50.497774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.497782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.497789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.497796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.497803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.497810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.497817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.497823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.497829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x860c70 is same with the state(6) to be set 00:21:08.167 
[2024-11-28 12:44:50.497849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.497856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.497863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.497870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.497877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.497886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.497893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.497900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.497906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbfd80 is same with the state(6) to be set 00:21:08.167 [2024-11-28 12:44:50.497932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.497940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.497954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.497961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.497969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.497975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.497982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.497989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.497995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3820 is same with the state(6) to be set 00:21:08.167 [2024-11-28 12:44:50.498012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8631c0 (9): Bad file descriptor 00:21:08.167 [2024-11-28 12:44:50.498035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.498043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.498051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.498057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.498066] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.498073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.498080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.498086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.498092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x777610 is same with the state(6) to be set 00:21:08.167 [2024-11-28 12:44:50.498115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.498123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.498131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.498140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.498147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.498154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.498162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 
nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.498168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.498174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdb100 is same with the state(6) to be set 00:21:08.167 [2024-11-28 12:44:50.498200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.498208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.498216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.498222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.498229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.498236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.498243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.167 [2024-11-28 12:44:50.498249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.167 [2024-11-28 12:44:50.498256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8e240 is same with the state(6) to be set 00:21:08.167 [2024-11-28 12:44:50.499511] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:08.167 [2024-11-28 12:44:50.499543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x860c70 (9): Bad file descriptor 00:21:08.167 [2024-11-28 12:44:50.500290] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:08.167 [2024-11-28 12:44:50.500551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:08.167 [2024-11-28 12:44:50.500567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x862d30 with addr=10.0.0.2, port=4420 00:21:08.167 [2024-11-28 12:44:50.500575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x862d30 is same with the state(6) to be set 00:21:08.167 [2024-11-28 12:44:50.500778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:08.167 [2024-11-28 12:44:50.500799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8ecf0 with addr=10.0.0.2, port=4420 00:21:08.167 [2024-11-28 12:44:50.500806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8ecf0 is same with the state(6) to be set 00:21:08.167 [2024-11-28 12:44:50.500927] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:08.167 [2024-11-28 12:44:50.501486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:08.167 [2024-11-28 12:44:50.501505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x860c70 with addr=10.0.0.2, port=4420 00:21:08.167 [2024-11-28 12:44:50.501513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x860c70 is same with the state(6) to be set 00:21:08.167 [2024-11-28 12:44:50.501526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x862d30 (9): Bad file descriptor 
00:21:08.167 [2024-11-28 12:44:50.501536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8ecf0 (9): Bad file descriptor 00:21:08.167 [2024-11-28 12:44:50.501581] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:08.167 [2024-11-28 12:44:50.501661] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:08.167 [2024-11-28 12:44:50.501706] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:08.167 [2024-11-28 12:44:50.501747] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:08.167 [2024-11-28 12:44:50.501764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x860c70 (9): Bad file descriptor 00:21:08.167 [2024-11-28 12:44:50.501773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:08.168 [2024-11-28 12:44:50.501780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:08.168 [2024-11-28 12:44:50.501788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:08.168 [2024-11-28 12:44:50.501798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:08.168 [2024-11-28 12:44:50.501805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:08.168 [2024-11-28 12:44:50.501811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:08.168 [2024-11-28 12:44:50.501818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:21:08.168 [2024-11-28 12:44:50.501824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:08.168 [2024-11-28 12:44:50.501882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:08.168 [2024-11-28 12:44:50.501890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:08.168 [2024-11-28 12:44:50.501897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:08.168 [2024-11-28 12:44:50.501903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:08.168 [2024-11-28 12:44:50.507610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x857200 (9): Bad file descriptor 00:21:08.168 [2024-11-28 12:44:50.507633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbfd80 (9): Bad file descriptor 00:21:08.168 [2024-11-28 12:44:50.507649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc3820 (9): Bad file descriptor 00:21:08.168 [2024-11-28 12:44:50.507671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x777610 (9): Bad file descriptor 00:21:08.168 [2024-11-28 12:44:50.507685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdb100 (9): Bad file descriptor 00:21:08.168 [2024-11-28 12:44:50.507701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8e240 (9): Bad file descriptor 00:21:08.168 [2024-11-28 12:44:50.507805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.507815] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.507827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.507835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.507846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.507854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.507862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.507868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.507877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.507883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.507891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.507898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.507907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.507913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.507922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.507928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.507936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.507943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.507955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.507962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.507970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.507977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.507985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.507992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:08.168 [2024-11-28 12:44:50.508000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.508007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.508015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.508022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.508030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.508038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.508047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.508055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.508063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.508069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.508078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.508084] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.508093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.508099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.508108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.508114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.508122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.508129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.508137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.508144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.508152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.508158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.508166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.508173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.508181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.508187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.508196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.508202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.508211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.508217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.508230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.508236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.508244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.508251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.508260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.168 [2024-11-28 12:44:50.508266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.168 [2024-11-28 12:44:50.508276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 
12:44:50.508341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508425] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 
[2024-11-28 12:44:50.508593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508676] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.169 [2024-11-28 12:44:50.508770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.169 [2024-11-28 12:44:50.508777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa671a0 is same with the state(6) to be set 00:21:08.170 [2024-11-28 12:44:50.509784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:08.170 [2024-11-28 12:44:50.510117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:08.170 [2024-11-28 12:44:50.510134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8631c0 with addr=10.0.0.2, port=4420 00:21:08.170 [2024-11-28 12:44:50.510142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8631c0 is same with the state(6) to be set 00:21:08.170 [2024-11-28 12:44:50.510400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:08.170 [2024-11-28 12:44:50.510412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:08.170 [2024-11-28 12:44:50.510432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8631c0 (9): Bad file descriptor 00:21:08.170 [2024-11-28 12:44:50.510742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:08.170 [2024-11-28 12:44:50.510755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8ecf0 with addr=10.0.0.2, port=4420 00:21:08.170 [2024-11-28 12:44:50.510762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xc8ecf0 is same with the state(6) to be set 00:21:08.170 [2024-11-28 12:44:50.510912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:08.170 [2024-11-28 12:44:50.510922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x862d30 with addr=10.0.0.2, port=4420 00:21:08.170 [2024-11-28 12:44:50.510929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x862d30 is same with the state(6) to be set 00:21:08.170 [2024-11-28 12:44:50.510936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:08.170 [2024-11-28 12:44:50.510943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:08.170 [2024-11-28 12:44:50.510956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:08.170 [2024-11-28 12:44:50.510963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:08.170 [2024-11-28 12:44:50.511004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8ecf0 (9): Bad file descriptor 00:21:08.170 [2024-11-28 12:44:50.511014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x862d30 (9): Bad file descriptor 00:21:08.170 [2024-11-28 12:44:50.511054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:08.170 [2024-11-28 12:44:50.511062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:08.170 [2024-11-28 12:44:50.511069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:21:08.170 [2024-11-28 12:44:50.511076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:08.170 [2024-11-28 12:44:50.511082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:08.170 [2024-11-28 12:44:50.511088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:08.170 [2024-11-28 12:44:50.511094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:08.170 [2024-11-28 12:44:50.511099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:08.170 [2024-11-28 12:44:50.511128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:08.170 [2024-11-28 12:44:50.511314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:08.170 [2024-11-28 12:44:50.511326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x860c70 with addr=10.0.0.2, port=4420 00:21:08.170 [2024-11-28 12:44:50.511337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x860c70 is same with the state(6) to be set 00:21:08.170 [2024-11-28 12:44:50.511368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x860c70 (9): Bad file descriptor 00:21:08.170 [2024-11-28 12:44:50.511398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:08.170 [2024-11-28 12:44:50.511404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:08.170 [2024-11-28 12:44:50.511410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:21:08.170 [2024-11-28 12:44:50.511416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:08.170 [2024-11-28 12:44:50.517759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.170 [2024-11-28 12:44:50.517789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-28 12:44:50.517801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.170 [2024-11-28 12:44:50.517808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-28 12:44:50.517817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.170 [2024-11-28 12:44:50.517824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-28 12:44:50.517833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.170 [2024-11-28 12:44:50.517839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-28 12:44:50.517848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.170 [2024-11-28 12:44:50.517854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-28 12:44:50.517863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.170 [2024-11-28 12:44:50.517869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-28 12:44:50.517878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.170 [2024-11-28 12:44:50.517885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-28 12:44:50.517893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.170 [2024-11-28 12:44:50.517900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-28 12:44:50.517909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.170 [2024-11-28 12:44:50.517915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-28 12:44:50.517923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.170 [2024-11-28 12:44:50.517930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-28 12:44:50.517942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.170 [2024-11-28 12:44:50.517954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-28 12:44:50.517963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.170 [2024-11-28 12:44:50.517970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-28 12:44:50.517979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.170 [2024-11-28 12:44:50.517985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-28 12:44:50.517993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.170 [2024-11-28 12:44:50.518000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-28 12:44:50.518008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.170 [2024-11-28 12:44:50.518015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-28 12:44:50.518023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.170 [2024-11-28 12:44:50.518029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.170 [2024-11-28 12:44:50.518037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:08.170 [2024-11-28 12:44:50.518044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... identical READ command/completion pairs repeat for cid 17-63 (lba 26752-32640, len:128), each aborted with SQ DELETION (00/08) ...] 00:21:08.172 [2024-11-28 12:44:50.518757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa68150 is same with the state(6) to be set [... identical READ command/completion pairs repeat for cid 0-63 (lba 24576-32640, len:128), each aborted with SQ DELETION (00/08) ...] 00:21:08.173 [2024-11-28 12:44:50.520763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc68770 is same with the state(6) to be set 00:21:08.173 [2024-11-28 12:44:50.521772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:08.173 [2024-11-28 12:44:50.521784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.173 [2024-11-28 12:44:50.521794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.173 [2024-11-28 12:44:50.521801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.173 [2024-11-28 12:44:50.521810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.173 [2024-11-28 12:44:50.521817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.173 [2024-11-28 12:44:50.521825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.173 [2024-11-28 12:44:50.521832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.173 [2024-11-28 12:44:50.521840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.173 [2024-11-28 12:44:50.521847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.173 [2024-11-28 12:44:50.521856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.173 [2024-11-28 12:44:50.521862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.173 [2024-11-28 12:44:50.521875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.173 [2024-11-28 12:44:50.521882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.173 [2024-11-28 12:44:50.521890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.173 [2024-11-28 12:44:50.521897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.173 [2024-11-28 12:44:50.521905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.173 [2024-11-28 12:44:50.521911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.173 [2024-11-28 12:44:50.521919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.173 [2024-11-28 12:44:50.521926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.173 [2024-11-28 12:44:50.521934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.173 [2024-11-28 12:44:50.521941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.173 [2024-11-28 12:44:50.521956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.173 [2024-11-28 12:44:50.521963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.173 [2024-11-28 12:44:50.521972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.173 [2024-11-28 12:44:50.521978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.173 [2024-11-28 12:44:50.521986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.173 [2024-11-28 12:44:50.521993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522135] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522214] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 
12:44:50.522385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.174 [2024-11-28 12:44:50.522496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.174 [2024-11-28 12:44:50.522504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.522511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.522519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.522526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.522534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.522541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.522549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.522556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.522565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.522571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.522579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.522586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.522594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.522600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.522608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.522615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.522624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.522632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:08.175 [2024-11-28 12:44:50.522641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.522647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.522656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.522662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.522670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.522677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.522685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.522692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.522700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.522707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.522715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.522721] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.522729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.522736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.522743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6acf0 is same with the state(6) to be set 00:21:08.175 [2024-11-28 12:44:50.523754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.523767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.523778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.523785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.523794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.523801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.523811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.523823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.523832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.523841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.523849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.523856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.523864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.523871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.523880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.523886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.523895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.523901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.523909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:08.175 [2024-11-28 12:44:50.523916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.523924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.523931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.523939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.523945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.523958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.523965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.523973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.523979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.523987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.523994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.175 [2024-11-28 12:44:50.524002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.175 [2024-11-28 12:44:50.524008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:16-63 (lba:26624-32640, step 128) ...]
00:21:08.177 [2024-11-28 12:44:50.524733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb18a0 is same with the state(6) to be set
00:21:08.177 [2024-11-28 12:44:50.525726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.177 [2024-11-28 12:44:50.525746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1-63 (lba:16512-24448, step 128) ...]
00:21:08.178 [2024-11-28 12:44:50.526708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa6d70 is same with the state(6) to be set
00:21:08.178 [2024-11-28 12:44:50.527728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.178 [2024-11-28 12:44:50.527749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs continue for cid:1-4 (lba:16512-16896, step 128) ...]
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.178 [2024-11-28 12:44:50.527820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.178 [2024-11-28 12:44:50.527827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.178 [2024-11-28 12:44:50.527835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.178 [2024-11-28 12:44:50.527842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.178 [2024-11-28 12:44:50.527850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.178 [2024-11-28 12:44:50.527857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.178 [2024-11-28 12:44:50.527865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.178 [2024-11-28 12:44:50.527871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.178 [2024-11-28 12:44:50.527880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.178 [2024-11-28 12:44:50.527886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.527894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:08.179 [2024-11-28 12:44:50.527901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.527909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.527915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.527924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.527930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.527939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.527953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.527962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.527968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.527977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.527983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.527992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.527999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528250] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528332] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.179 [2024-11-28 12:44:50.528485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.179 [2024-11-28 12:44:50.528494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.180 [2024-11-28 12:44:50.528500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.180 [2024-11-28 
12:44:50.528509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.180 [2024-11-28 12:44:50.528515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.180 [2024-11-28 12:44:50.528525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.180 [2024-11-28 12:44:50.528531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.180 [2024-11-28 12:44:50.528540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.180 [2024-11-28 12:44:50.528546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.180 [2024-11-28 12:44:50.528554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.180 [2024-11-28 12:44:50.528561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.180 [2024-11-28 12:44:50.528569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.180 [2024-11-28 12:44:50.528576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.180 [2024-11-28 12:44:50.533363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.180 [2024-11-28 12:44:50.533375] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.180 [2024-11-28 12:44:50.533384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.180 [2024-11-28 12:44:50.533391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.180 [2024-11-28 12:44:50.533399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.180 [2024-11-28 12:44:50.533406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.180 [2024-11-28 12:44:50.533414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.180 [2024-11-28 12:44:50.533421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.180 [2024-11-28 12:44:50.533429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.180 [2024-11-28 12:44:50.533435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.180 [2024-11-28 12:44:50.533444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.180 [2024-11-28 12:44:50.533450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.180 [2024-11-28 12:44:50.533458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.180 [2024-11-28 12:44:50.533465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.180 [2024-11-28 12:44:50.533473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.180 [2024-11-28 12:44:50.533480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.180 [2024-11-28 12:44:50.533489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.180 [2024-11-28 12:44:50.533499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.180 [2024-11-28 12:44:50.533506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa8060 is same with the state(6) to be set 00:21:08.180 [2024-11-28 12:44:50.534493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:08.180 [2024-11-28 12:44:50.534510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:08.180 [2024-11-28 12:44:50.534521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:21:08.180 [2024-11-28 12:44:50.534531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:21:08.180 [2024-11-28 12:44:50.534613] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:21:08.180 [2024-11-28 12:44:50.534627] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:21:08.180 [2024-11-28 12:44:50.534692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:21:08.180 task offset: 33920 on job bdev=Nvme3n1 fails
00:21:08.180
00:21:08.180 Latency(us)
00:21:08.180 [2024-11-28T11:44:50.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:08.180 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:08.180 Job: Nvme1n1 ended in about 0.93 seconds with error
00:21:08.180 Verification LBA range: start 0x0 length 0x400
00:21:08.180 Nvme1n1 : 0.93 205.36 12.84 68.45 0.00 231404.63 18236.10 207891.59
00:21:08.180 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:08.180 Job: Nvme2n1 ended in about 0.94 seconds with error
00:21:08.180 Verification LBA range: start 0x0 length 0x400
00:21:08.180 Nvme2n1 : 0.94 203.20 12.70 67.73 0.00 229916.94 15272.74 222480.47
00:21:08.180 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:08.180 Job: Nvme3n1 ended in about 0.92 seconds with error
00:21:08.180 Verification LBA range: start 0x0 length 0x400
00:21:08.180 Nvme3n1 : 0.92 283.53 17.72 69.53 0.00 173110.43 3561.74 221568.67
00:21:08.180 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:08.180 Job: Nvme4n1 ended in about 0.92 seconds with error
00:21:08.180 Verification LBA range: start 0x0 length 0x400
00:21:08.180 Nvme4n1 : 0.92 277.83 17.36 69.46 0.00 172819.17 2849.39 206979.78
00:21:08.180 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:08.180 Job: Nvme5n1 ended in about 0.95 seconds with error
00:21:08.180 Verification LBA range: start 0x0 length 0x400
00:21:08.180 Nvme5n1 : 0.95 202.77 12.67 67.59 0.00 218477.52 16412.49 232510.33
00:21:08.180 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:08.180 Job: Nvme6n1 ended in about 0.92 seconds with error
00:21:08.180 Verification LBA range: start 0x0 length 0x400
00:21:08.180 Nvme6n1 : 0.92 207.67 12.98 69.22 0.00 208944.36 4502.04 226127.69
00:21:08.180 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:08.180 Job: Nvme7n1 ended in about 0.95 seconds with error
00:21:08.180 Verification LBA range: start 0x0 length 0x400
00:21:08.180 Nvme7n1 : 0.95 202.35 12.65 67.45 0.00 211086.91 16640.45 208803.39
00:21:08.180 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:08.180 Job: Nvme8n1 ended in about 0.95 seconds with error
00:21:08.180 Verification LBA range: start 0x0 length 0x400
00:21:08.180 Nvme8n1 : 0.95 201.93 12.62 67.31 0.00 207589.51 16526.47 193302.71
00:21:08.180 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:08.180 Job: Nvme9n1 ended in about 0.95 seconds with error
00:21:08.180 Verification LBA range: start 0x0 length 0x400
00:21:08.180 Nvme9n1 : 0.95 134.34 8.40 67.17 0.00 272198.20 27012.23 253481.85
00:21:08.180 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:08.180 Job: Nvme10n1 ended in about 0.96 seconds with error
00:21:08.180 Verification LBA range: start 0x0 length 0x400
00:21:08.180 Nvme10n1 : 0.96 133.39 8.34 66.70 0.00 269247.59 19945.74 257129.07
00:21:08.180 [2024-11-28T11:44:50.699Z] ===================================================================================================================
00:21:08.180 [2024-11-28T11:44:50.699Z] Total : 2052.37 128.27 680.61 0.00 214510.76 2849.39 257129.07
00:21:08.180 [2024-11-28 12:44:50.570192] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:08.180 [2024-11-28 12:44:50.570239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:21:08.180 [2024-11-28 12:44:50.570564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:08.180 [2024-11-28 12:44:50.570583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x857200 with addr=10.0.0.2, port=4420
00:21:08.180 [2024-11-28 12:44:50.570593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x857200 is same with the state(6) to be set
00:21:08.180 [2024-11-28 12:44:50.570815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:08.180 [2024-11-28 12:44:50.570826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8e240 with addr=10.0.0.2, port=4420
00:21:08.180 [2024-11-28 12:44:50.570834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8e240 is same with the state(6) to be set
00:21:08.180 [2024-11-28 12:44:50.571053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:08.180 [2024-11-28 12:44:50.571064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x777610 with addr=10.0.0.2, port=4420
00:21:08.180 [2024-11-28 12:44:50.571071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x777610 is same with the state(6) to be set
00:21:08.180 [2024-11-28 12:44:50.571276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:08.180 [2024-11-28 12:44:50.571286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc3820 with addr=10.0.0.2, port=4420
00:21:08.180 [2024-11-28 12:44:50.571293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3820 is same with the state(6) to be set
00:21:08.180 [2024-11-28 12:44:50.572678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:21:08.180 [2024-11-28 12:44:50.572694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:21:08.180 [2024-11-28 12:44:50.572704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:21:08.180 [2024-11-28 12:44:50.572714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:21:08.180 [2024-11-28 12:44:50.573001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:08.180 [2024-11-28 12:44:50.573015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdb100 with addr=10.0.0.2, port=4420
00:21:08.180 [2024-11-28 12:44:50.573023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdb100 is same with the state(6) to be set
00:21:08.181 [2024-11-28 12:44:50.573223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:08.181 [2024-11-28 12:44:50.573234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbfd80 with addr=10.0.0.2, port=4420
00:21:08.181 [2024-11-28 12:44:50.573241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbfd80 is same with the state(6) to be set
00:21:08.181 [2024-11-28 12:44:50.573253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x857200 (9): Bad file descriptor
00:21:08.181 [2024-11-28 12:44:50.573264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8e240 (9): Bad file descriptor
00:21:08.181 [2024-11-28 12:44:50.573277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x777610 (9): Bad file descriptor
00:21:08.181 [2024-11-28 12:44:50.573285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc3820 (9): Bad file descriptor
00:21:08.181 [2024-11-28 12:44:50.573320] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:21:08.181 [2024-11-28 12:44:50.573330] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:21:08.181 [2024-11-28 12:44:50.573341] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:21:08.181 [2024-11-28 12:44:50.573351] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:21:08.181 [2024-11-28 12:44:50.573584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:08.181 [2024-11-28 12:44:50.573598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8631c0 with addr=10.0.0.2, port=4420
00:21:08.181 [2024-11-28 12:44:50.573605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8631c0 is same with the state(6) to be set
00:21:08.181 [2024-11-28 12:44:50.573832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:08.181 [2024-11-28 12:44:50.573842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x862d30 with addr=10.0.0.2, port=4420
00:21:08.181 [2024-11-28 12:44:50.573849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x862d30 is same with the state(6) to be set
00:21:08.181 [2024-11-28 12:44:50.574001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:08.181 [2024-11-28 12:44:50.574012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8ecf0 with addr=10.0.0.2, port=4420
00:21:08.181 [2024-11-28 12:44:50.574019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8ecf0 is same with the state(6) to be set
00:21:08.181 [2024-11-28 12:44:50.574246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:08.181 [2024-11-28 12:44:50.574257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x860c70 with addr=10.0.0.2, port=4420
00:21:08.181 [2024-11-28 12:44:50.574264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x860c70 is same with the state(6) to be set
00:21:08.181 [2024-11-28 12:44:50.574273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdb100 (9): Bad file descriptor
00:21:08.181 [2024-11-28 12:44:50.574283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbfd80 (9): Bad file descriptor
00:21:08.181 [2024-11-28 12:44:50.574291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:21:08.181 [2024-11-28 12:44:50.574297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:21:08.181 [2024-11-28 12:44:50.574306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:21:08.181 [2024-11-28 12:44:50.574314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:21:08.181 [2024-11-28 12:44:50.574322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:21:08.181 [2024-11-28 12:44:50.574328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:21:08.181 [2024-11-28 12:44:50.574334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:21:08.181 [2024-11-28 12:44:50.574340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:08.181 [2024-11-28 12:44:50.574350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:08.181 [2024-11-28 12:44:50.574356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:08.181 [2024-11-28 12:44:50.574362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:08.181 [2024-11-28 12:44:50.574368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:08.181 [2024-11-28 12:44:50.574374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:08.181 [2024-11-28 12:44:50.574380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:08.181 [2024-11-28 12:44:50.574386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:08.181 [2024-11-28 12:44:50.574391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:21:08.181 [2024-11-28 12:44:50.574467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8631c0 (9): Bad file descriptor 00:21:08.181 [2024-11-28 12:44:50.574478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x862d30 (9): Bad file descriptor 00:21:08.181 [2024-11-28 12:44:50.574486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8ecf0 (9): Bad file descriptor 00:21:08.181 [2024-11-28 12:44:50.574494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x860c70 (9): Bad file descriptor 00:21:08.181 [2024-11-28 12:44:50.574502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:08.181 [2024-11-28 12:44:50.574507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:08.181 [2024-11-28 12:44:50.574513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:08.181 [2024-11-28 12:44:50.574519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:21:08.181 [2024-11-28 12:44:50.574526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:08.181 [2024-11-28 12:44:50.574531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:08.181 [2024-11-28 12:44:50.574537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:08.181 [2024-11-28 12:44:50.574543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:21:08.181 [2024-11-28 12:44:50.574569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:08.181 [2024-11-28 12:44:50.574576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:08.181 [2024-11-28 12:44:50.574582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:08.181 [2024-11-28 12:44:50.574588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:08.181 [2024-11-28 12:44:50.574595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:08.181 [2024-11-28 12:44:50.574601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:08.181 [2024-11-28 12:44:50.574607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:08.181 [2024-11-28 12:44:50.574613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:08.181 [2024-11-28 12:44:50.574619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:08.181 [2024-11-28 12:44:50.574629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:08.181 [2024-11-28 12:44:50.574635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:08.181 [2024-11-28 12:44:50.574641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:21:08.181 [2024-11-28 12:44:50.574648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:08.181 [2024-11-28 12:44:50.574653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:08.181 [2024-11-28 12:44:50.574659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:08.181 [2024-11-28 12:44:50.574665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:08.441 12:44:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:09.377 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2582727 00:21:09.377 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:21:09.377 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2582727 00:21:09.377 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:09.377 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:09.377 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:21:09.377 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:09.377 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2582727 00:21:09.377 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:09.377 12:44:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:09.377 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:09.377 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:09.377 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:09.377 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:09.377 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:09.377 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:09.377 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:09.636 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:09.637 rmmod nvme_tcp 00:21:09.637 rmmod nvme_fabrics 00:21:09.637 rmmod nvme_keyring 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2582459 ']' 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2582459 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2582459 ']' 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2582459 00:21:09.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2582459) - No such process 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2582459 is not found' 00:21:09.637 Process with pid 2582459 is not found 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:09.637 
12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.637 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.541 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:11.541 00:21:11.541 real 0m7.308s 00:21:11.541 user 0m17.506s 00:21:11.541 sys 0m1.298s 00:21:11.541 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:11.541 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:11.541 ************************************ 00:21:11.541 END TEST nvmf_shutdown_tc3 00:21:11.541 ************************************ 00:21:11.541 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:11.541 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:11.541 12:44:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:11.541 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:11.541 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:11.541 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:11.800 ************************************ 00:21:11.800 START TEST nvmf_shutdown_tc4 00:21:11.800 ************************************ 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.800 12:44:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 
-- # local -ga e810 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.800 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:11.801 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.801 
12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:11.801 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:11.801 Found net devices under 0000:86:00.0: cvl_0_0 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:11.801 Found net devices under 0000:86:00.1: cvl_0_1 
00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:11.801 12:44:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT' 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:11.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:11.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:21:11.801 00:21:11.801 --- 10.0.0.2 ping statistics --- 00:21:11.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.801 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:11.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:11.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:21:11.801 00:21:11.801 --- 10.0.0.1 ping statistics --- 00:21:11.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.801 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.801 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:11.802 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:12.061 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:12.061 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:12.061 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:12.061 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:12.061 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2583928 00:21:12.061 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2583928 00:21:12.061 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:12.061 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2583928 ']' 00:21:12.061 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.061 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:12.061 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:12.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.061 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:12.061 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:12.061 [2024-11-28 12:44:54.405220] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:21:12.061 [2024-11-28 12:44:54.405263] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.061 [2024-11-28 12:44:54.472780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:12.061 [2024-11-28 12:44:54.512955] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.061 [2024-11-28 12:44:54.512994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.061 [2024-11-28 12:44:54.513000] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.061 [2024-11-28 12:44:54.513006] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.061 [2024-11-28 12:44:54.513012] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:12.061 [2024-11-28 12:44:54.514559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.061 [2024-11-28 12:44:54.514647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:12.061 [2024-11-28 12:44:54.514754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.061 [2024-11-28 12:44:54.514755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:12.321 [2024-11-28 12:44:54.661067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.321 12:44:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.321 12:44:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:12.321 Malloc1 00:21:12.321 [2024-11-28 12:44:54.783716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.321 Malloc2 00:21:12.580 Malloc3 00:21:12.580 Malloc4 00:21:12.580 Malloc5 00:21:12.580 Malloc6 00:21:12.580 Malloc7 00:21:12.580 Malloc8 00:21:12.838 Malloc9 
00:21:12.838 Malloc10 00:21:12.838 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.838 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:12.838 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:12.838 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:12.838 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2584047 00:21:12.838 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:12.838 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:12.838 [2024-11-28 12:44:55.269498] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:18.121 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:18.121 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2583928 00:21:18.121 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2583928 ']' 00:21:18.121 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2583928 00:21:18.121 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:18.121 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:18.121 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2583928 00:21:18.121 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:18.121 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:18.121 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2583928' 00:21:18.121 killing process with pid 2583928 00:21:18.121 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2583928 00:21:18.121 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2583928 00:21:18.121 [2024-11-28 12:45:00.283812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0440 is same with the state(6) to be set 00:21:18.121 [2024-11-28 
12:45:00.283874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0440 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.283882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0440 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.283889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0440 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.283896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0440 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.283902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0440 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.284506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0910 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.284739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9bc50 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.284768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9bc50 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.284777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9bc50 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.284784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9bc50 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.284791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9bc50 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.284803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9bc50 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.285476] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff70 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.285502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff70 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.285510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff70 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.285517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff70 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.285524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff70 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.285530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff70 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.285536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff70 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.285542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aff70 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.286621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d930 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.286641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d930 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.286649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d930 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.286657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d930 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.286664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xa9d930 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.286670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d930 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.286679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d930 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.286686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d930 is same with the state(6) to be set 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 starting I/O failed: -6 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 starting I/O failed: -6 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 starting I/O failed: -6 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 starting I/O failed: -6 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 starting I/O failed: -6 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 starting I/O failed: -6 00:21:18.121 [2024-11-28 12:45:00.287314] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9de00 is same with the state(6) to be set 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 [2024-11-28 12:45:00.287338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9de00 is same with the state(6) to be set 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 [2024-11-28 12:45:00.287352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9de00 is same with the state(6) to be set 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 [2024-11-28 12:45:00.287368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9de00 is same with the state(6) to be set 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 [2024-11-28 12:45:00.287379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9de00 is same with the state(6) to be set 00:21:18.121 starting I/O failed: -6 00:21:18.121 [2024-11-28 12:45:00.287388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9de00 is same with the state(6) to be set 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 [2024-11-28 12:45:00.287399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9de00 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.287409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9de00 is same with the state(6) to be set 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 [2024-11-28 12:45:00.287419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9de00 is same with the state(6) to be set 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 [2024-11-28 12:45:00.287429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9de00 is same with the state(6) to be 
set 00:21:18.121 [2024-11-28 12:45:00.287439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9de00 is same with the state(6) to be set 00:21:18.121 Write completed with error (sct=0, sc=8) 00:21:18.121 starting I/O failed: -6 [2024-11-28 12:45:00.287486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.121 starting I/O failed: -6 00:21:18.121 starting I/O failed: -6 00:21:18.121 starting I/O failed: -6 00:21:18.121 [2024-11-28 12:45:00.288117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9e2d0 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.288139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9e2d0 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.288145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9e2d0 is same with the state(6) to be set 00:21:18.121 [2024-11-28 12:45:00.288152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9e2d0 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9e2d0 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9e2d0 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9e2d0 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9e2d0 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xa9e2d0 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9e2d0 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9e2d0 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9e2d0 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9e2d0 is same with the state(6) to be set 00:21:18.122 NVMe io qpair process completion error 00:21:18.122 [2024-11-28 12:45:00.288840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d460 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d460 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d460 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d460 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d460 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d460 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d460 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0xa9d460 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d460 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d460 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d460 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d460 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d460 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d460 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d460 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d460 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.288975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d460 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.295559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb21810 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.295582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb21810 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.295589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb21810 is same with the 
state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.295596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb21810 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.295602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb21810 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.295609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb21810 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.295615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb21810 is same with the state(6) to be set 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 starting I/O failed: -6 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 [2024-11-28 12:45:00.295946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb21b90 is same with the state(6) to be set 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 starting I/O failed: -6 00:21:18.122 [2024-11-28 12:45:00.295977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb21b90 is same with the state(6) to be set 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 [2024-11-28 12:45:00.295990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb21b90 is same with the state(6) to be set 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 [2024-11-28 12:45:00.296005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb21b90 is same with the state(6) to be set 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 
00:21:18.122 starting I/O failed: -6 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 starting I/O failed: -6 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 starting I/O failed: -6 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 starting I/O failed: -6 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 starting I/O failed: -6 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 starting I/O failed: -6 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 [2024-11-28 12:45:00.296391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb22060 is same with the state(6) to be set 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 starting I/O failed: -6 00:21:18.122 [2024-11-28 12:45:00.296413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb22060 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.296421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xb22060 is same with the state(6) to be set 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 [2024-11-28 12:45:00.296428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb22060 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.296435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb22060 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.296441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb22060 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.296448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb22060 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.296458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 starting I/O failed: -6 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 starting I/O failed: -6 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 starting I/O failed: -6 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 starting I/O failed: -6 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 starting I/O failed: -6 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 starting I/O failed: -6 00:21:18.122 [2024-11-28 12:45:00.296764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b020 is same with the state(6) to be set 00:21:18.122 Write completed with error (sct=0, sc=8) 
00:21:18.122 [2024-11-28 12:45:00.296787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b020 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.296796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b020 is same with the state(6) to be set 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 [2024-11-28 12:45:00.296802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b020 is same with the state(6) to be set 00:21:18.122 [2024-11-28 12:45:00.296809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b020 is same with the state(6) to be set 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 [2024-11-28 12:45:00.296819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b020 is same with the state(6) to be set 00:21:18.122 starting I/O failed: -6 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 starting I/O failed: -6 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 starting I/O failed: -6 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 starting I/O failed: -6 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 starting I/O failed: -6 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 starting I/O failed: -6 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.122 starting I/O failed: -6 00:21:18.122 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 Write
completed with error (sct=0, sc=8) 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 [2024-11-28 12:45:00.297117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aa6b0 is same with the state(6) to be set 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 [2024-11-28 12:45:00.297135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aa6b0 is same with the state(6) to be set 00:21:18.123 [2024-11-28 12:45:00.297142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aa6b0 is same with the state(6) to be set 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 [2024-11-28 12:45:00.297149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aa6b0 is same with the state(6) to be set 00:21:18.123 [2024-11-28 12:45:00.297156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aa6b0 is same with the state(6) to be set 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 [2024-11-28 12:45:00.297163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aa6b0 is same with the state(6) to be set 00:21:18.123 starting I/O failed: -6 00:21:18.123 [2024-11-28 12:45:00.297170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aa6b0 is same with the state(6) to be set 00:21:18.123 [2024-11-28 12:45:00.297177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aa6b0 is same with the state(6) to be set 00:21:18.123 [2024-11-28 12:45:00.297183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aa6b0 is same with the state(6) to be set 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 [2024-11-28
12:45:00.297191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aa6b0 is same with the state(6) to be set 00:21:18.123 [2024-11-28 12:45:00.297198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aa6b0 is same with the state(6) to be set 00:21:18.123 [2024-11-28 12:45:00.297204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aa6b0 is same with the state(6) to be set 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 [2024-11-28 12:45:00.297210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aa6b0 is same with the state(6) to be set 00:21:18.123 [2024-11-28 12:45:00.297216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aa6b0 is same with the state(6) to be set 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 [2024-11-28 12:45:00.297222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aa6b0 is same with the state(6) to be set 00:21:18.123 [2024-11-28 12:45:00.297229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aa6b0 is same with the state(6) to be set 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 
[2024-11-28 12:45:00.297423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 [2024-11-28 12:45:00.297530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aab80 is same with the state(6) to be set 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 [2024-11-28 12:45:00.297554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aab80 is same with the state(6) to be set 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 [2024-11-28 12:45:00.297565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aab80 is same with the state(6) to be set 00:21:18.123 starting I/O failed: -6 00:21:18.123 [2024-11-28 12:45:00.297577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aab80 is same with the state(6) to be set 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 [2024-11-28 12:45:00.297654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ab070 is same with the state(6) to be set 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 [2024-11-28 12:45:00.297665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ab070 is same with the state(6) to be set 00:21:18.123 [2024-11-28 12:45:00.297673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ab070 is same with the state(6) to be set 00:21:18.123
[2024-11-28 12:45:00.297679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ab070 is same with the state(6) to be set 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 [2024-11-28 12:45:00.297686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ab070 is same with the state(6) to be set 00:21:18.123 [2024-11-28 12:45:00.297692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ab070 is same with the state(6) to be set 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 [2024-11-28 12:45:00.297700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ab070 is same with the state(6) to be set 00:21:18.123 [2024-11-28 12:45:00.297707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ab070 is same with the state(6) to be set 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123
starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 [2024-11-28 12:45:00.298136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aa1e0 is same with the state(6) to be set 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 [2024-11-28 12:45:00.298149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aa1e0 is same with the state(6) to be set 00:21:18.123 [2024-11-28 12:45:00.298156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aa1e0 is same with the state(6) to be set 00:21:18.123 [2024-11-28 12:45:00.298164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aa1e0 is same with the state(6) to be set 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 [2024-11-28 12:45:00.298171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aa1e0 is same with the state(6) to be set 00:21:18.123 [2024-11-28 12:45:00.298178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aa1e0 is same with the state(6) to be set 00:21:18.123
Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 [2024-11-28 12:45:00.298436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.123 starting I/O failed: -6 00:21:18.123 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 
00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: 
-6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O 
failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 [2024-11-28 12:45:00.300134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:18.124 NVMe io qpair process completion error 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed 
with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 [2024-11-28 12:45:00.303766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:18.124 [2024-11-28 12:45:00.303828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8acb30 is same with the state(6) to be set 00:21:18.124 [2024-11-28 12:45:00.303849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8acb30 is same with the state(6) to be set 00:21:18.124 [2024-11-28 12:45:00.303857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8acb30 is same with the 
state(6) to be set 00:21:18.124 [2024-11-28 12:45:00.303863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8acb30 is same with the state(6) to be set 00:21:18.124 [2024-11-28 12:45:00.303870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8acb30 is same with the state(6) to be set 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 [2024-11-28 12:45:00.304233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ad020 is same with the state(6) to be set 00:21:18.124 starting
I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 [2024-11-28 12:45:00.304257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ad020 is same with the state(6) to be set 00:21:18.124 [2024-11-28 12:45:00.304269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ad020 is same with the state(6) to be set 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 [2024-11-28 12:45:00.304279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ad020 is same with the state(6) to be set 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 starting I/O failed: -6 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.124 Write completed with error (sct=0, sc=8) 00:21:18.125 [2024-11-28 12:45:00.304350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ad510 is same with the state(6) to be set 00:21:18.125 starting I/O failed: -6 00:21:18.125 [2024-11-28 12:45:00.304370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ad510 is same with the state(6) to be set 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 [2024-11-28 12:45:00.304378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ad510 is same with the state(6) to be set 00:21:18.125 starting I/O failed: -6 00:21:18.125 [2024-11-28 12:45:00.304386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ad510 is same with the state(6) to be set 00:21:18.125 [2024-11-28 12:45:00.304392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ad510 is same with the state(6) to be set 00:21:18.125 [2024-11-28 12:45:00.304399]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ad510 is same with the state(6) to be set 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 [2024-11-28 12:45:00.304407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ad510 is same with the state(6) to be set 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 starting I/O failed: -6 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 starting I/O failed: -6 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 starting I/O failed: -6 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 starting I/O failed: -6 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 starting I/O failed: -6 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 starting I/O failed: -6 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 starting I/O failed: -6 00:21:18.125 [2024-11-28 12:45:00.304664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 starting I/O failed: -6 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 starting I/O failed: -6 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 starting I/O failed: -6 00:21:18.125 [2024-11-28 12:45:00.304824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x8ac660 is same with the state(6) to be set 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 starting I/O failed: -6 00:21:18.125 [2024-11-28 12:45:00.304846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ac660 is same with the state(6) to be set 00:21:18.125 [2024-11-28 12:45:00.304854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ac660 is same with the state(6) to be set 00:21:18.125 [2024-11-28 12:45:00.304861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ac660 is same with the state(6) to be set 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 [2024-11-28 12:45:00.304869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ac660 is same with the state(6) to be set 00:21:18.125 [2024-11-28 12:45:00.304876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ac660 is same with the state(6) to be set 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 starting I/O failed: -6 00:21:18.125 [2024-11-28 12:45:00.304884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ac660 is same with the state(6) to be set 00:21:18.125 [2024-11-28 12:45:00.304890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ac660 is same with the state(6) to be set 00:21:18.125 [2024-11-28 12:45:00.304897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ac660 is same with the state(6) to be set 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 [2024-11-28 12:45:00.304903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ac660 is same with the state(6) to be set 00:21:18.125 starting I/O failed: -6 00:21:18.125 Write completed with error (sct=0, sc=8) 00:21:18.125 starting I/O failed: -6 00:21:18.125 Write completed with error (sct=0, sc=8)
00:21:18.125 Write completed with error (sct=0, sc=8)
00:21:18.125 starting I/O failed: -6
00:21:18.125 [2024-11-28 12:45:00.305712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:18.126 [2024-11-28 12:45:00.307144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:18.126 NVMe io qpair process completion error
00:21:18.126 [2024-11-28 12:45:00.308168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:18.127 [2024-11-28 12:45:00.308958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:18.127 [2024-11-28 12:45:00.310493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:18.128 [2024-11-28 12:45:00.312324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:18.128 NVMe io qpair process completion error
00:21:18.128 [2024-11-28 12:45:00.313928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:18.128 [2024-11-28 12:45:00.315083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:18.129 Write completed with error (sct=0, sc=8)
00:21:18.129 starting I/O failed:
-6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O 
failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 [2024-11-28 12:45:00.318609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:18.129 NVMe io qpair process completion error 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 
starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 [2024-11-28 12:45:00.319566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 
00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.129 Write completed with error (sct=0, sc=8) 
00:21:18.129 starting I/O failed: -6 00:21:18.129 Write completed with error (sct=0, sc=8) 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 [2024-11-28 12:45:00.320467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 
starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 
Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 [2024-11-28 12:45:00.321504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write 
completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 
Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.130 starting I/O failed: -6 00:21:18.130 Write completed with error (sct=0, sc=8) 00:21:18.131 starting I/O failed: -6 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 starting I/O failed: -6 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 starting I/O failed: -6 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 starting I/O failed: -6 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 starting I/O failed: -6 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 starting I/O failed: -6 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 starting I/O failed: -6 
00:21:18.131 [2024-11-28 12:45:00.325113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:18.131 NVMe io qpair process completion error 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 
Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 
Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 
Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 [2024-11-28 12:45:00.327813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:18.131 NVMe io qpair process completion error 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 starting I/O failed: -6 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 starting I/O failed: -6 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 starting I/O failed: -6 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 starting I/O failed: -6 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 starting I/O failed: -6 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 
00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 starting I/O failed: -6 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 starting I/O failed: -6 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 starting I/O failed: -6 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 starting I/O failed: -6 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 [2024-11-28 12:45:00.328840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:18.131 starting I/O failed: -6 00:21:18.131 starting I/O failed: -6 00:21:18.131 starting I/O failed: -6 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 starting I/O failed: -6 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 starting I/O failed: -6 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 starting I/O failed: -6 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 starting I/O failed: -6 00:21:18.131 Write completed with error (sct=0, sc=8) 00:21:18.131 Write completed with error 
(sct=0, sc=8)
00:21:18.131 Write completed with error (sct=0, sc=8)
00:21:18.131 starting I/O failed: -6
[identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages repeated; omitted]
00:21:18.132 [2024-11-28 12:45:00.329769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[repeated write-failure messages omitted]
00:21:18.132 [2024-11-28 12:45:00.330856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated write-failure messages omitted]
00:21:18.133 [2024-11-28 12:45:00.332644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:18.133 NVMe io qpair process completion error
[repeated write-failure messages omitted]
00:21:18.133 [2024-11-28 12:45:00.335229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated write-failure messages omitted]
00:21:18.134 [2024-11-28 12:45:00.336731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:18.134 NVMe io qpair process completion error
[repeated write-failure messages omitted]
00:21:18.134 [2024-11-28 12:45:00.338450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated write-failure messages omitted]
00:21:18.135 [2024-11-28 12:45:00.339501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[repeated write-failure messages omitted]
00:21:18.135 Write completed with error (sct=0, sc=8)
00:21:18.135 starting I/O failed:
-6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O 
failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.135 Write completed with error (sct=0, sc=8) 00:21:18.135 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 [2024-11-28 12:45:00.347442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:18.136 NVMe io qpair process completion error 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 
Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write 
completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 [2024-11-28 12:45:00.348912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error 
(sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting 
I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write 
completed with error (sct=0, sc=8) 00:21:18.136 [2024-11-28 12:45:00.350013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.136 Write completed with error (sct=0, sc=8) 00:21:18.136 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with 
error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed 
with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 Write completed with error (sct=0, sc=8) 00:21:18.137 starting I/O failed: -6 00:21:18.137 [2024-11-28 12:45:00.352629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:18.137 NVMe io qpair process completion error 00:21:18.137 Initializing NVMe Controllers 
00:21:18.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:21:18.137 Controller IO queue size 128, less than required.
00:21:18.137 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:21:18.137 Controller IO queue size 128, less than required.
00:21:18.137 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:21:18.137 Controller IO queue size 128, less than required.
00:21:18.137 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:21:18.137 Controller IO queue size 128, less than required.
00:21:18.137 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:21:18.137 Controller IO queue size 128, less than required.
00:21:18.137 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:21:18.137 Controller IO queue size 128, less than required.
00:21:18.137 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:18.137 Controller IO queue size 128, less than required.
00:21:18.137 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:21:18.137 Controller IO queue size 128, less than required.
00:21:18.137 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:21:18.137 Controller IO queue size 128, less than required.
00:21:18.137 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:18.137 Controller IO queue size 128, less than required.
00:21:18.137 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:18.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:18.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:18.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:18.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:18.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:18.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:18.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:18.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:18.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:18.137 Initialization complete. Launching workers.
00:21:18.137 ========================================================
00:21:18.137 Latency(us)
00:21:18.137 Device Information : IOPS MiB/s Average min max
00:21:18.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2130.80 91.56 60070.80 772.42 114725.84
00:21:18.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2153.03 92.51 59459.55 973.72 131981.56
00:21:18.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2122.00 91.18 59763.54 703.05 109623.25
00:21:18.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2159.53 92.79 59320.24 953.91 107781.63
00:21:18.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2162.46 92.92 58643.47 942.03 129709.95
00:21:18.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2148.21 92.31 59044.09 834.36 104886.67
00:21:18.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2123.89 91.26 59734.13 1034.60 103141.74
00:21:18.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2086.99 89.68 60825.37 908.52 100602.82
00:21:18.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2147.16 92.26 59094.33 538.23 110426.92
00:21:18.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2144.64 92.15 59214.15 639.04 98815.84
00:21:18.137 ========================================================
00:21:18.137 Total : 21378.71 918.62 59511.59 538.23 131981.56
00:21:18.137
00:21:18.137 [2024-11-28 12:45:00.355618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8890 is same with the state(6) to be set
00:21:18.137 [2024-11-28 12:45:00.355664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8bc0 is same with the state(6) to be set
00:21:18.137 [2024-11-28 12:45:00.355697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8560 is same with the state(6) to be set
00:21:18.137 [2024-11-28 12:45:00.355727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e9a70 is same with the state(6) to be set
00:21:18.137 [2024-11-28 12:45:00.355756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11eaae0 is same with the state(6) to be set
00:21:18.137 [2024-11-28 12:45:00.355785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e9740 is same with the state(6) to be set
00:21:18.137 [2024-11-28 12:45:00.355816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ea720 is same with the state(6) to be set
00:21:18.137 [2024-11-28 12:45:00.355845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8ef0 is same with the state(6) to be set
00:21:18.138 [2024-11-28 12:45:00.355873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ea900 is same with the state(6) to be set
00:21:18.138 [2024-11-28 12:45:00.355903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e9410 is same with the state(6) to be set
00:21:18.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:21:18.397 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2584047
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2584047
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2584047
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2583928 ']'
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2583928
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2583928 ']'
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2583928
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2583928) - No such process
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2583928 is not found'
Process with pid 2583928 is not found
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:19.335 12:45:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:21.871 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:21.871
00:21:21.871 real 0m9.740s
00:21:21.871 user 0m24.941s
00:21:21.871 sys 0m5.189s
00:21:21.871 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:21.871 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:21.871 ************************************
00:21:21.871 END TEST nvmf_shutdown_tc4
00:21:21.871 ************************************
00:21:21.871 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:21:21.871
00:21:21.871 real 0m39.515s
00:21:21.871 user 1m37.767s
00:21:21.871 sys 0m13.532s
00:21:21.871 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:21.871 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:21:21.871 ************************************
00:21:21.871 END TEST nvmf_shutdown
00:21:21.871 ************************************
00:21:21.871 12:45:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:21:21.871 12:45:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:21.871 12:45:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:21.871 12:45:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:21.871 ************************************
00:21:21.871 START TEST nvmf_nsid
00:21:21.871 ************************************
00:21:21.871 12:45:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:21:21.872 * Looking for test storage...
00:21:21.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version
00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:21.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.872 --rc genhtml_branch_coverage=1 00:21:21.872 --rc genhtml_function_coverage=1 00:21:21.872 --rc genhtml_legend=1 00:21:21.872 --rc geninfo_all_blocks=1 00:21:21.872 --rc 
geninfo_unexecuted_blocks=1 00:21:21.872 00:21:21.872 ' 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:21.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.872 --rc genhtml_branch_coverage=1 00:21:21.872 --rc genhtml_function_coverage=1 00:21:21.872 --rc genhtml_legend=1 00:21:21.872 --rc geninfo_all_blocks=1 00:21:21.872 --rc geninfo_unexecuted_blocks=1 00:21:21.872 00:21:21.872 ' 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:21.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.872 --rc genhtml_branch_coverage=1 00:21:21.872 --rc genhtml_function_coverage=1 00:21:21.872 --rc genhtml_legend=1 00:21:21.872 --rc geninfo_all_blocks=1 00:21:21.872 --rc geninfo_unexecuted_blocks=1 00:21:21.872 00:21:21.872 ' 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:21.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.872 --rc genhtml_branch_coverage=1 00:21:21.872 --rc genhtml_function_coverage=1 00:21:21.872 --rc genhtml_legend=1 00:21:21.872 --rc geninfo_all_blocks=1 00:21:21.872 --rc geninfo_unexecuted_blocks=1 00:21:21.872 00:21:21.872 ' 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.872 12:45:04 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.872 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:21.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:21.873 12:45:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:28.441 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:28.441 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:28.441 Found net devices under 0000:86:00.0: cvl_0_0 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:28.441 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:28.442 Found net devices under 0000:86:00.1: cvl_0_1 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:28.442 12:45:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:28.442 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:21:28.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:21:28.442 00:21:28.442 --- 10.0.0.2 ping statistics --- 00:21:28.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.442 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:28.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:28.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:21:28.442 00:21:28.442 --- 10.0.0.1 ping statistics --- 00:21:28.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.442 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:28.442 12:45:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2589113 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2589113 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2589113 ']' 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.442 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:28.442 [2024-11-28 12:45:09.956082] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:21:28.442 [2024-11-28 12:45:09.956129] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.442 [2024-11-28 12:45:10.023333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.442 [2024-11-28 12:45:10.069910] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.442 [2024-11-28 12:45:10.069946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.442 [2024-11-28 12:45:10.069958] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.442 [2024-11-28 12:45:10.069964] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.442 [2024-11-28 12:45:10.069969] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:28.442 [2024-11-28 12:45:10.070524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2589277 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:28.442 
12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=ff060b90-c4ed-4a40-ac76-652289d15e78 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=620bdd7e-3186-498f-8e0c-a2efb8203c5e 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=39d6bc4c-b94a-49a9-b367-228186fecd0d 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.442 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.442 null0 00:21:28.442 null1 00:21:28.442 [2024-11-28 12:45:10.239968] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:21:28.442 [2024-11-28 12:45:10.240017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2589277 ] 00:21:28.442 null2 00:21:28.443 [2024-11-28 12:45:10.251804] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.443 [2024-11-28 12:45:10.275984] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.443 [2024-11-28 12:45:10.303793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.443 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.443 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2589277 /var/tmp/tgt2.sock 00:21:28.443 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2589277 ']' 00:21:28.443 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:28.443 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:28.443 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:28.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:21:28.443 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:28.443 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.443 [2024-11-28 12:45:10.349869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.443 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.443 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:28.443 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:28.443 [2024-11-28 12:45:10.872410] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.443 [2024-11-28 12:45:10.888517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:28.443 nvme0n1 nvme0n2 00:21:28.443 nvme1n1 00:21:28.443 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:28.443 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:28.443 12:45:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:29.820 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:29.820 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:29.820 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:21:29.820 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 
00:21:29.820 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:29.820 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:29.820 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:29.820 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:29.820 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:29.820 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:29.820 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:29.820 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:29.820 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid ff060b90-c4ed-4a40-ac76-652289d15e78 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:30.756 
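The `waitforblk` calls above poll `lsblk` until the newly connected namespace shows up as a block device, retrying up to 15 times one second apart. A minimal sketch of that retry pattern, generalized to poll any predicate command (the helper name `wait_for` and the generalization are illustrative, not SPDK's exact implementation):

```shell
#!/bin/sh
# Retry a predicate command until it succeeds, up to 15 one-second attempts.
# waitforblk in the log is this pattern with the predicate
# "lsblk -l -o NAME | grep -q -w $dev".
wait_for() {
    i=0
    until "$@"; do
        [ "$i" -lt 15 ] || return 1   # give up after ~15 tries
        i=$((i + 1))
        sleep 1
    done
}

# e.g.: wait_for sh -c 'lsblk -l -o NAME | grep -q -w nvme0n1'
```

Returning non-zero on timeout (rather than hanging) lets the caller's `trap cleanup` fire and tear the targets down, which is why the log's loop bounds its retries.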
12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ff060b90c4ed4a40ac76652289d15e78 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FF060B90C4ED4A40AC76652289D15E78 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ FF060B90C4ED4A40AC76652289D15E78 == \F\F\0\6\0\B\9\0\C\4\E\D\4\A\4\0\A\C\7\6\6\5\2\2\8\9\D\1\5\E\7\8 ]] 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 620bdd7e-3186-498f-8e0c-a2efb8203c5e 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 
00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=620bdd7e3186498f8e0ca2efb8203c5e 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 620BDD7E3186498F8E0CA2EFB8203C5E 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 620BDD7E3186498F8E0CA2EFB8203C5E == \6\2\0\B\D\D\7\E\3\1\8\6\4\9\8\F\8\E\0\C\A\2\E\F\B\8\2\0\3\C\5\E ]] 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 39d6bc4c-b94a-49a9-b367-228186fecd0d 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 
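The NGUID comparisons above rely on the fact that an NVMe NGUID is just the 32 hex digits of the namespace UUID with the dashes removed: `uuid2nguid` strips the dashes (`tr -d -`), `nvme id-ns ... -o json | jq -r .nguid` reads the value back from the device, and the two are compared after uppercasing. A minimal sketch of the conversion (this reimplementation is inferred from the log, not copied from SPDK's `nvmf/common.sh`):

```shell
#!/bin/sh
# Convert a canonical UUID to the NGUID form reported by nvme-cli:
# drop the dashes, uppercase the hex digits.
uuid2nguid() {
    echo "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}

uuid2nguid ff060b90-c4ed-4a40-ac76-652289d15e78
# FF060B90C4ED4A40AC76652289D15E78
```

This matches the check in the log, where namespace 1's NGUID `FF060B90C4ED4A40AC76652289D15E78` is compared against the `uuidgen` output `ff060b90-c4ed-4a40-ac76-652289d15e78` assigned at setup.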
00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=39d6bc4cb94a49a9b367228186fecd0d 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 39D6BC4CB94A49A9B367228186FECD0D 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 39D6BC4CB94A49A9B367228186FECD0D == \3\9\D\6\B\C\4\C\B\9\4\A\4\9\A\9\B\3\6\7\2\2\8\1\8\6\F\E\C\D\0\D ]] 00:21:30.756 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:31.016 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:31.016 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:31.016 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2589277 00:21:31.016 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2589277 ']' 00:21:31.016 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2589277 00:21:31.016 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:31.016 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.016 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2589277 00:21:31.016 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:31.016 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:31.016 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2589277' 00:21:31.016 killing process with pid 2589277 00:21:31.016 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2589277 00:21:31.016 
12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2589277 00:21:31.275 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:31.275 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:31.275 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:31.275 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:31.275 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:21:31.275 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:31.275 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:31.275 rmmod nvme_tcp 00:21:31.275 rmmod nvme_fabrics 00:21:31.534 rmmod nvme_keyring 00:21:31.534 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:31.534 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:31.534 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:31.534 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2589113 ']' 00:21:31.534 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2589113 00:21:31.534 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2589113 ']' 00:21:31.534 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2589113 00:21:31.534 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:31.534 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.534 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2589113 00:21:31.535 
12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:31.535 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:31.535 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2589113' 00:21:31.535 killing process with pid 2589113 00:21:31.535 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2589113 00:21:31.535 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2589113 00:21:31.535 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:31.535 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:31.535 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:31.535 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:31.535 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:31.535 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:31.535 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:31.535 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:31.535 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:31.535 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.535 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.535 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.073 12:45:16 
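The `killprocess` calls above (for pids 2589277 and 2589113) follow a careful shutdown pattern: confirm the pid is still alive with `kill -0`, check its command name with `ps` so a recycled pid belonging to some unrelated process is never killed, then signal and reap it. A minimal sketch of that pattern, assuming a simplified version of the helper (the log's real implementation in `common/autotest_common.sh` also special-cases `sudo`-wrapped processes):

```shell
#!/bin/sh
# Kill a pid only after verifying it is alive and is not a sudo wrapper,
# mirroring the kill -0 / ps comm= / kill sequence visible in the log.
killprocess() {
    pid=$1
    kill -0 "$pid" 2>/dev/null || return 0        # already gone, nothing to do
    name=$(ps -o comm= -p "$pid")
    [ "$name" != sudo ] || return 1               # never kill the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap it if it is our child
}
```

Checking the command name matters in long-running CI: between the moment a pid is recorded and the moment cleanup runs, the original process may have exited and the kernel may have reused the pid.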
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:34.073 00:21:34.073 real 0m12.163s 00:21:34.073 user 0m9.523s 00:21:34.073 sys 0m5.351s 00:21:34.073 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:34.073 12:45:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:34.073 ************************************ 00:21:34.073 END TEST nvmf_nsid 00:21:34.073 ************************************ 00:21:34.073 12:45:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:34.073 00:21:34.073 real 11m47.365s 00:21:34.073 user 25m36.020s 00:21:34.073 sys 3m36.583s 00:21:34.073 12:45:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:34.073 12:45:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:34.073 ************************************ 00:21:34.073 END TEST nvmf_target_extra 00:21:34.073 ************************************ 00:21:34.073 12:45:16 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:34.073 12:45:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:34.073 12:45:16 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.073 12:45:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:34.073 ************************************ 00:21:34.073 START TEST nvmf_host 00:21:34.073 ************************************ 00:21:34.073 12:45:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:34.073 * Looking for test storage... 
00:21:34.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:34.073 12:45:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:34.073 12:45:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:34.073 12:45:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:34.073 12:45:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:34.073 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:34.073 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:34.073 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:34.073 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:34.073 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:34.073 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:34.073 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:34.073 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:34.073 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:34.073 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:34.073 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:34.073 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:34.073 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:34.073 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:34.073 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:34.073 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:34.073 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:34.073 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:34.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.074 --rc genhtml_branch_coverage=1 00:21:34.074 --rc genhtml_function_coverage=1 00:21:34.074 --rc genhtml_legend=1 00:21:34.074 --rc geninfo_all_blocks=1 00:21:34.074 --rc geninfo_unexecuted_blocks=1 00:21:34.074 00:21:34.074 ' 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:34.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.074 --rc genhtml_branch_coverage=1 00:21:34.074 --rc genhtml_function_coverage=1 00:21:34.074 --rc genhtml_legend=1 00:21:34.074 --rc 
geninfo_all_blocks=1 00:21:34.074 --rc geninfo_unexecuted_blocks=1 00:21:34.074 00:21:34.074 ' 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:34.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.074 --rc genhtml_branch_coverage=1 00:21:34.074 --rc genhtml_function_coverage=1 00:21:34.074 --rc genhtml_legend=1 00:21:34.074 --rc geninfo_all_blocks=1 00:21:34.074 --rc geninfo_unexecuted_blocks=1 00:21:34.074 00:21:34.074 ' 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:34.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.074 --rc genhtml_branch_coverage=1 00:21:34.074 --rc genhtml_function_coverage=1 00:21:34.074 --rc genhtml_legend=1 00:21:34.074 --rc geninfo_all_blocks=1 00:21:34.074 --rc geninfo_unexecuted_blocks=1 00:21:34.074 00:21:34.074 ' 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:34.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.074 ************************************ 00:21:34.074 START TEST nvmf_multicontroller 00:21:34.074 ************************************ 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:34.074 * Looking for test storage... 
00:21:34.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:34.074 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:34.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.335 --rc genhtml_branch_coverage=1 00:21:34.335 --rc genhtml_function_coverage=1 
00:21:34.335 --rc genhtml_legend=1 00:21:34.335 --rc geninfo_all_blocks=1 00:21:34.335 --rc geninfo_unexecuted_blocks=1 00:21:34.335 00:21:34.335 ' 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:34.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.335 --rc genhtml_branch_coverage=1 00:21:34.335 --rc genhtml_function_coverage=1 00:21:34.335 --rc genhtml_legend=1 00:21:34.335 --rc geninfo_all_blocks=1 00:21:34.335 --rc geninfo_unexecuted_blocks=1 00:21:34.335 00:21:34.335 ' 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:34.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.335 --rc genhtml_branch_coverage=1 00:21:34.335 --rc genhtml_function_coverage=1 00:21:34.335 --rc genhtml_legend=1 00:21:34.335 --rc geninfo_all_blocks=1 00:21:34.335 --rc geninfo_unexecuted_blocks=1 00:21:34.335 00:21:34.335 ' 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:34.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.335 --rc genhtml_branch_coverage=1 00:21:34.335 --rc genhtml_function_coverage=1 00:21:34.335 --rc genhtml_legend=1 00:21:34.335 --rc geninfo_all_blocks=1 00:21:34.335 --rc geninfo_unexecuted_blocks=1 00:21:34.335 00:21:34.335 ' 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.335 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.336 12:45:16 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:21:34.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0
00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit
00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs
00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no
00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller --
nvmf/common.sh@440 -- # remove_spdk_ns 00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:34.336 12:45:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:21:39.613 Found 0000:86:00.0 (0x8086 - 0x159b)
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:21:39.613 Found 0000:86:00.1 (0x8086 - 0x159b)
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:21:39.613 Found net devices under 0000:86:00.0: cvl_0_0
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:21:39.613 Found net devices under 0000:86:00.1: cvl_0_1
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller --
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:21:39.613 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:21:39.614 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:21:39.614 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:21:39.614 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:21:39.614 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:21:39.614 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:21:39.614 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:39.614 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:21:39.614 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:21:39.614 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:21:39.614 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:39.614 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:39.614 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:39.614 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:21:39.614 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:39.614 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:39.614 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:39.614 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:39.614 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:39.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:39.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms
00:21:39.614
00:21:39.614 --- 10.0.0.2 ping statistics ---
00:21:39.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:39.614 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms
00:21:39.614 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:39.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:39.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms
00:21:39.873
00:21:39.873 --- 10.0.0.1 ping statistics ---
00:21:39.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:39.873 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms
00:21:39.873 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:39.873 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0
00:21:39.873 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:39.873 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:39.873 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:39.874 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:39.874 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:39.874 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:39.874 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:39.874 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:21:39.874 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:39.874 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:39.874 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:39.874 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2593365
00:21:39.874 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2593365
00:21:39.874 12:45:22
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2593365 ']'
00:21:39.874 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:39.874 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:39.874 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:39.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:39.874 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:21:39.874 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:39.874 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:39.874 [2024-11-28 12:45:22.221074] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization...
00:21:39.874 [2024-11-28 12:45:22.221122] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:39.874 [2024-11-28 12:45:22.287437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:21:39.874 [2024-11-28 12:45:22.330218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:39.874 [2024-11-28 12:45:22.330255] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:39.874 [2024-11-28 12:45:22.330263] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:39.874 [2024-11-28 12:45:22.330270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:39.874 [2024-11-28 12:45:22.330275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:39.874 [2024-11-28 12:45:22.331656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:39.874 [2024-11-28 12:45:22.331743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:21:39.874 [2024-11-28 12:45:22.331745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:40.133 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:40.133 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0
00:21:40.133 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:21:40.133 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:40.133 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:40.133 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:40.133 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:21:40.133 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:40.133 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:40.133 [2024-11-28 12:45:22.469784] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:40.133 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.133 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:40.133 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.133 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.133 Malloc0 00:21:40.133 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.133 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:40.133 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.134 [2024-11-28 
12:45:22.536958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.134 [2024-11-28 12:45:22.544884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.134 Malloc1 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2593464 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
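The target setup traced above follows a fixed RPC sequence: create the TCP transport, create a malloc bdev, create a subsystem, add the bdev as a namespace, then add two TCP listeners per subsystem. A minimal sketch of that sequence as data, with the command strings copied from the log (this is an illustration of the test flow, not SPDK source):

```python
# RPC sequence for cnode1 as driven by host/multicontroller.sh in the log above
# (the same pattern repeats for cnode2 with Malloc1).
setup_cmds = [
    "nvmf_create_transport -t tcp -o -u 8192",
    "bdev_malloc_create 64 512 -b Malloc0",
    "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001",
    "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0",
    "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420",
    "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421",
]

# Each subsystem gets two TCP listeners (ports 4420 and 4421) so that the
# multipath/failover attach cases later in the log have a second path to use.
ports = sorted({c.split()[-1] for c in setup_cmds if "add_listener" in c})
print(ports)  # ['4420', '4421']
```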
SIGINT SIGTERM EXIT 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2593464 /var/tmp/bdevperf.sock 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2593464 ']' 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:40.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.134 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.393 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.393 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:40.393 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:40.393 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.393 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.653 NVMe0n1 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.653 1 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:40.653 12:45:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.653 request: 00:21:40.653 { 00:21:40.653 "name": "NVMe0", 00:21:40.653 "trtype": "tcp", 00:21:40.653 "traddr": "10.0.0.2", 00:21:40.653 "adrfam": "ipv4", 00:21:40.653 "trsvcid": "4420", 00:21:40.653 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.653 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:40.653 "hostaddr": "10.0.0.1", 00:21:40.653 "prchk_reftag": false, 00:21:40.653 "prchk_guard": false, 00:21:40.653 "hdgst": false, 00:21:40.653 "ddgst": false, 00:21:40.653 "allow_unrecognized_csi": false, 00:21:40.653 "method": "bdev_nvme_attach_controller", 00:21:40.653 "req_id": 1 00:21:40.653 } 00:21:40.653 Got JSON-RPC error response 00:21:40.653 response: 00:21:40.653 { 00:21:40.653 "code": -114, 00:21:40.653 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:40.653 } 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:40.653 12:45:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.653 request: 00:21:40.653 { 00:21:40.653 "name": "NVMe0", 00:21:40.653 "trtype": "tcp", 00:21:40.653 "traddr": "10.0.0.2", 00:21:40.653 "adrfam": "ipv4", 00:21:40.653 "trsvcid": "4420", 00:21:40.653 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:40.653 "hostaddr": "10.0.0.1", 00:21:40.653 "prchk_reftag": false, 00:21:40.653 "prchk_guard": false, 00:21:40.653 "hdgst": false, 00:21:40.653 "ddgst": false, 00:21:40.653 "allow_unrecognized_csi": false, 00:21:40.653 "method": "bdev_nvme_attach_controller", 00:21:40.653 "req_id": 1 00:21:40.653 } 00:21:40.653 Got JSON-RPC error response 00:21:40.653 response: 00:21:40.653 { 00:21:40.653 "code": -114, 00:21:40.653 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:40.653 } 00:21:40.653 12:45:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.653 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:40.654 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:40.654 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:40.654 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:40.654 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.654 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:40.654 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.654 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:40.654 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.654 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.654 request: 00:21:40.654 { 00:21:40.654 "name": "NVMe0", 00:21:40.654 "trtype": "tcp", 00:21:40.654 "traddr": "10.0.0.2", 00:21:40.654 "adrfam": "ipv4", 00:21:40.654 "trsvcid": "4420", 00:21:40.654 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.654 "hostaddr": "10.0.0.1", 00:21:40.654 "prchk_reftag": false, 00:21:40.654 "prchk_guard": false, 00:21:40.654 "hdgst": false, 00:21:40.654 "ddgst": false, 00:21:40.654 "multipath": "disable", 00:21:40.654 "allow_unrecognized_csi": false, 00:21:40.654 "method": "bdev_nvme_attach_controller", 00:21:40.654 "req_id": 1 00:21:40.654 } 00:21:40.654 Got JSON-RPC error response 00:21:40.654 response: 00:21:40.654 { 00:21:40.654 "code": -114, 00:21:40.654 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:40.654 } 00:21:40.654 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:40.654 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:40.654 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.654 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.914 request: 00:21:40.914 { 00:21:40.914 "name": "NVMe0", 00:21:40.914 "trtype": "tcp", 00:21:40.914 "traddr": "10.0.0.2", 00:21:40.914 "adrfam": "ipv4", 00:21:40.914 "trsvcid": "4420", 00:21:40.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.914 "hostaddr": "10.0.0.1", 00:21:40.914 "prchk_reftag": false, 00:21:40.914 "prchk_guard": false, 00:21:40.914 "hdgst": false, 00:21:40.914 "ddgst": false, 00:21:40.914 "multipath": "failover", 00:21:40.914 "allow_unrecognized_csi": false, 00:21:40.914 "method": "bdev_nvme_attach_controller", 00:21:40.914 "req_id": 1 00:21:40.914 } 00:21:40.914 Got JSON-RPC error response 00:21:40.914 response: 00:21:40.914 { 00:21:40.914 "code": -114, 00:21:40.914 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:40.914 } 00:21:40.914 12:45:23 
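Each of the NOT-wrapped attach attempts above fails with JSON-RPC error -114 because a controller named NVMe0 already exists on /var/tmp/bdevperf.sock; only the message varies with the multipath mode. A small helper reproducing the mapping seen in the responses (strings copied from the log, not from SPDK source):

```python
def expected_attach_error(multipath=None):
    """Error body the log shows for a duplicate NVMe0 attach on this socket."""
    if multipath == "disable":
        msg = "A controller named NVMe0 already exists and multipath is disabled"
    else:
        # Covers the no-flag case, the different-subnqn (cnode2) case,
        # and -x failover against the same existing network path.
        msg = "A controller named NVMe0 already exists with the specified network path"
    return {"code": -114, "message": msg}
```

Note that the later attach at host/multicontroller.sh@79 succeeds: it targets port 4421, which is a genuinely new path for the existing controller name.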
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.914 NVMe0n1 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.914 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.174 00:21:41.174 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.174 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:41.174 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:41.174 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.174 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.174 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.174 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:41.174 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:42.554 { 00:21:42.554 "results": [ 00:21:42.554 { 00:21:42.554 "job": "NVMe0n1", 00:21:42.554 "core_mask": "0x1", 00:21:42.554 "workload": "write", 00:21:42.554 "status": "finished", 00:21:42.554 "queue_depth": 128, 00:21:42.554 "io_size": 4096, 00:21:42.554 "runtime": 1.007332, 00:21:42.554 "iops": 24082.427640539565, 00:21:42.554 "mibps": 94.07198297085768, 00:21:42.554 "io_failed": 0, 00:21:42.554 "io_timeout": 0, 00:21:42.554 "avg_latency_us": 5306.985137851125, 00:21:42.554 "min_latency_us": 3333.7878260869566, 00:21:42.554 "max_latency_us": 13791.053913043479 00:21:42.554 } 00:21:42.554 ], 00:21:42.554 "core_count": 1 00:21:42.554 } 00:21:42.554 12:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:42.554 12:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.554 12:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.554 12:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.554 12:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:42.554 12:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2593464 00:21:42.554 12:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2593464 ']' 00:21:42.554 12:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2593464 00:21:42.554 12:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:42.554 12:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.554 12:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2593464 00:21:42.554 12:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:42.554 12:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:42.554 12:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2593464' 00:21:42.554 killing process with pid 2593464 00:21:42.554 12:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2593464 00:21:42.554 12:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2593464 00:21:42.554 12:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:42.554 12:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.554 12:45:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.554 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.554 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:42.554 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.554 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.554 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.554 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:42.554 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:42.554 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:42.554 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:42.554 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:42.554 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:42.554 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:42.554 [2024-11-28 12:45:22.647619] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:21:42.554 [2024-11-28 12:45:22.647668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2593464 ] 00:21:42.554 [2024-11-28 12:45:22.712263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.554 [2024-11-28 12:45:22.755909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.554 [2024-11-28 12:45:23.623385] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 9033cd6c-0324-4c3d-ad38-462e638dcfed already exists 00:21:42.554 [2024-11-28 12:45:23.623413] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:9033cd6c-0324-4c3d-ad38-462e638dcfed alias for bdev NVMe1n1 00:21:42.554 [2024-11-28 12:45:23.623422] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:42.554 Running I/O for 1 seconds... 00:21:42.554 24021.00 IOPS, 93.83 MiB/s 00:21:42.554 Latency(us) 00:21:42.554 [2024-11-28T11:45:25.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.554 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:42.554 NVMe0n1 : 1.01 24082.43 94.07 0.00 0.00 5306.99 3333.79 13791.05 00:21:42.554 [2024-11-28T11:45:25.073Z] =================================================================================================================== 00:21:42.554 [2024-11-28T11:45:25.073Z] Total : 24082.43 94.07 0.00 0.00 5306.99 3333.79 13791.05 00:21:42.554 Received shutdown signal, test time was about 1.000000 seconds 00:21:42.554 00:21:42.554 Latency(us) 00:21:42.554 [2024-11-28T11:45:25.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.554 [2024-11-28T11:45:25.073Z] =================================================================================================================== 00:21:42.554 [2024-11-28T11:45:25.073Z] Total : 0.00 0.00 0.00 
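The bdevperf summary above is internally consistent: MiB/s is IOPS × io_size, and average latency is roughly queue_depth / IOPS by Little's law. A quick cross-check of the reported numbers (values copied from the results block; nothing here is new measurement):

```python
iops = 24082.427640539565      # "iops" from the perform_tests results above
io_size = 4096                 # bdevperf -o 4096
queue_depth = 128              # bdevperf -q 128

# Throughput: 24082.43 IOPS at 4 KiB per IO.
mibps = iops * io_size / (1 << 20)
print(round(mibps, 2))         # 94.07, matching "mibps" in the results

# Little's law is only approximate here (ramp-up is included in the
# runtime), so the reported 5306.99 us average is close but not exact.
approx_latency_us = queue_depth / iops * 1e6
print(round(approx_latency_us, 1))
```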
0.00 0.00 0.00 0.00 00:21:42.554 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:42.554 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:42.554 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:42.554 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:42.554 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:42.554 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:42.554 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:42.554 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:42.554 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:42.554 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:42.554 rmmod nvme_tcp 00:21:42.554 rmmod nvme_fabrics 00:21:42.813 rmmod nvme_keyring 00:21:42.813 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:42.813 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:42.813 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:42.813 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2593365 ']' 00:21:42.813 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2593365 00:21:42.813 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2593365 ']' 00:21:42.813 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2593365 
00:21:42.813 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:42.813 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.813 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2593365 00:21:42.813 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:42.813 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:42.813 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2593365' 00:21:42.813 killing process with pid 2593365 00:21:42.813 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2593365 00:21:42.813 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2593365 00:21:43.072 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:43.072 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:43.072 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:43.072 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:43.072 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:43.072 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:43.072 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:43.072 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:43.072 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:43.072 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.072 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.072 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.977 12:45:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:44.977 00:21:44.977 real 0m10.996s 00:21:44.977 user 0m13.194s 00:21:44.977 sys 0m4.886s 00:21:44.977 12:45:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:44.977 12:45:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.977 ************************************ 00:21:44.977 END TEST nvmf_multicontroller 00:21:44.977 ************************************ 00:21:44.977 12:45:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:44.977 12:45:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:44.977 12:45:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:44.977 12:45:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.237 ************************************ 00:21:45.237 START TEST nvmf_aer 00:21:45.237 ************************************ 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:45.237 * Looking for test storage... 
00:21:45.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:45.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.237 --rc genhtml_branch_coverage=1 00:21:45.237 --rc genhtml_function_coverage=1 00:21:45.237 --rc genhtml_legend=1 00:21:45.237 --rc geninfo_all_blocks=1 00:21:45.237 --rc geninfo_unexecuted_blocks=1 00:21:45.237 00:21:45.237 ' 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:45.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.237 --rc 
genhtml_branch_coverage=1 00:21:45.237 --rc genhtml_function_coverage=1 00:21:45.237 --rc genhtml_legend=1 00:21:45.237 --rc geninfo_all_blocks=1 00:21:45.237 --rc geninfo_unexecuted_blocks=1 00:21:45.237 00:21:45.237 ' 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:45.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.237 --rc genhtml_branch_coverage=1 00:21:45.237 --rc genhtml_function_coverage=1 00:21:45.237 --rc genhtml_legend=1 00:21:45.237 --rc geninfo_all_blocks=1 00:21:45.237 --rc geninfo_unexecuted_blocks=1 00:21:45.237 00:21:45.237 ' 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:45.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.237 --rc genhtml_branch_coverage=1 00:21:45.237 --rc genhtml_function_coverage=1 00:21:45.237 --rc genhtml_legend=1 00:21:45.237 --rc geninfo_all_blocks=1 00:21:45.237 --rc geninfo_unexecuted_blocks=1 00:21:45.237 00:21:45.237 ' 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.237 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.237 12:45:27 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:45.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:45.238 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:50.535 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:50.535 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.535 12:45:32 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:50.535 Found net devices under 0000:86:00.0: cvl_0_0 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:50.535 Found net devices under 0000:86:00.1: cvl_0_1 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:50.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:50.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:21:50.535 00:21:50.535 --- 10.0.0.2 ping statistics --- 00:21:50.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.535 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:50.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:50.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:21:50.535 00:21:50.535 --- 10.0.0.1 ping statistics --- 00:21:50.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.535 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2597377 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2597377 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2597377 ']' 00:21:50.535 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.536 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:50.536 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.536 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:50.536 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.536 12:45:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:50.536 [2024-11-28 12:45:32.925728] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:21:50.536 [2024-11-28 12:45:32.925775] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.536 [2024-11-28 12:45:32.991285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:50.536 [2024-11-28 12:45:33.034437] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:50.536 [2024-11-28 12:45:33.034475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:50.536 [2024-11-28 12:45:33.034482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.536 [2024-11-28 12:45:33.034488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.536 [2024-11-28 12:45:33.034493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:50.536 [2024-11-28 12:45:33.036012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.536 [2024-11-28 12:45:33.036032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:50.536 [2024-11-28 12:45:33.036121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:50.536 [2024-11-28 12:45:33.036123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.795 [2024-11-28 12:45:33.173936] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.795 Malloc0 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.795 [2024-11-28 12:45:33.241177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.795 [ 00:21:50.795 { 00:21:50.795 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:50.795 "subtype": "Discovery", 00:21:50.795 "listen_addresses": [], 00:21:50.795 "allow_any_host": true, 00:21:50.795 "hosts": [] 00:21:50.795 }, 00:21:50.795 { 00:21:50.795 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.795 "subtype": "NVMe", 00:21:50.795 "listen_addresses": [ 00:21:50.795 { 00:21:50.795 "trtype": "TCP", 00:21:50.795 "adrfam": "IPv4", 00:21:50.795 "traddr": "10.0.0.2", 00:21:50.795 "trsvcid": "4420" 00:21:50.795 } 00:21:50.795 ], 00:21:50.795 "allow_any_host": true, 00:21:50.795 "hosts": [], 00:21:50.795 "serial_number": "SPDK00000000000001", 00:21:50.795 "model_number": "SPDK bdev Controller", 00:21:50.795 "max_namespaces": 2, 00:21:50.795 "min_cntlid": 1, 00:21:50.795 "max_cntlid": 65519, 00:21:50.795 "namespaces": [ 00:21:50.795 { 00:21:50.795 "nsid": 1, 00:21:50.795 "bdev_name": "Malloc0", 00:21:50.795 "name": "Malloc0", 00:21:50.795 "nguid": "61E5D34511F5469EA7FDC5EDB4E581F0", 00:21:50.795 "uuid": "61e5d345-11f5-469e-a7fd-c5edb4e581f0" 00:21:50.795 } 00:21:50.795 ] 00:21:50.795 } 00:21:50.795 ] 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2597400 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:50.795 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:50.796 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:51.054 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:51.054 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:51.054 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:51.054 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:51.054 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:51.054 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:21:51.054 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:21:51.054 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.314 Malloc1 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.314 Asynchronous Event Request test 00:21:51.314 Attaching to 10.0.0.2 00:21:51.314 Attached to 10.0.0.2 00:21:51.314 Registering asynchronous event callbacks... 00:21:51.314 Starting namespace attribute notice tests for all controllers... 00:21:51.314 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:51.314 aer_cb - Changed Namespace 00:21:51.314 Cleaning up... 
00:21:51.314 [ 00:21:51.314 { 00:21:51.314 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:51.314 "subtype": "Discovery", 00:21:51.314 "listen_addresses": [], 00:21:51.314 "allow_any_host": true, 00:21:51.314 "hosts": [] 00:21:51.314 }, 00:21:51.314 { 00:21:51.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.314 "subtype": "NVMe", 00:21:51.314 "listen_addresses": [ 00:21:51.314 { 00:21:51.314 "trtype": "TCP", 00:21:51.314 "adrfam": "IPv4", 00:21:51.314 "traddr": "10.0.0.2", 00:21:51.314 "trsvcid": "4420" 00:21:51.314 } 00:21:51.314 ], 00:21:51.314 "allow_any_host": true, 00:21:51.314 "hosts": [], 00:21:51.314 "serial_number": "SPDK00000000000001", 00:21:51.314 "model_number": "SPDK bdev Controller", 00:21:51.314 "max_namespaces": 2, 00:21:51.314 "min_cntlid": 1, 00:21:51.314 "max_cntlid": 65519, 00:21:51.314 "namespaces": [ 00:21:51.314 { 00:21:51.314 "nsid": 1, 00:21:51.314 "bdev_name": "Malloc0", 00:21:51.314 "name": "Malloc0", 00:21:51.314 "nguid": "61E5D34511F5469EA7FDC5EDB4E581F0", 00:21:51.314 "uuid": "61e5d345-11f5-469e-a7fd-c5edb4e581f0" 00:21:51.314 }, 00:21:51.314 { 00:21:51.314 "nsid": 2, 00:21:51.314 "bdev_name": "Malloc1", 00:21:51.314 "name": "Malloc1", 00:21:51.314 "nguid": "C67349C489F248449817380D5B03AA82", 00:21:51.314 "uuid": "c67349c4-89f2-4844-9817-380d5b03aa82" 00:21:51.314 } 00:21:51.314 ] 00:21:51.314 } 00:21:51.314 ] 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2597400 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.314 12:45:33 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:51.314 rmmod nvme_tcp 00:21:51.314 rmmod nvme_fabrics 00:21:51.314 rmmod nvme_keyring 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
2597377 ']' 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2597377 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2597377 ']' 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2597377 00:21:51.314 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:51.315 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:51.315 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2597377 00:21:51.315 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:51.315 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:51.315 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2597377' 00:21:51.315 killing process with pid 2597377 00:21:51.315 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2597377 00:21:51.315 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2597377 00:21:51.574 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:51.574 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:51.574 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:51.574 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:51.574 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:51.574 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:51.574 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:51.574 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:51.574 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:51.574 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.574 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.574 12:45:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.109 12:45:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:54.109 00:21:54.109 real 0m8.547s 00:21:54.109 user 0m5.208s 00:21:54.109 sys 0m4.272s 00:21:54.109 12:45:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:54.109 12:45:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.109 ************************************ 00:21:54.109 END TEST nvmf_aer 00:21:54.109 ************************************ 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.110 ************************************ 00:21:54.110 START TEST nvmf_async_init 00:21:54.110 ************************************ 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:54.110 * Looking for test storage... 
00:21:54.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:54.110 12:45:36 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:54.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.110 --rc genhtml_branch_coverage=1 00:21:54.110 --rc genhtml_function_coverage=1 00:21:54.110 --rc genhtml_legend=1 00:21:54.110 --rc geninfo_all_blocks=1 00:21:54.110 --rc geninfo_unexecuted_blocks=1 00:21:54.110 
00:21:54.110 ' 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:54.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.110 --rc genhtml_branch_coverage=1 00:21:54.110 --rc genhtml_function_coverage=1 00:21:54.110 --rc genhtml_legend=1 00:21:54.110 --rc geninfo_all_blocks=1 00:21:54.110 --rc geninfo_unexecuted_blocks=1 00:21:54.110 00:21:54.110 ' 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:54.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.110 --rc genhtml_branch_coverage=1 00:21:54.110 --rc genhtml_function_coverage=1 00:21:54.110 --rc genhtml_legend=1 00:21:54.110 --rc geninfo_all_blocks=1 00:21:54.110 --rc geninfo_unexecuted_blocks=1 00:21:54.110 00:21:54.110 ' 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:54.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.110 --rc genhtml_branch_coverage=1 00:21:54.110 --rc genhtml_function_coverage=1 00:21:54.110 --rc genhtml_legend=1 00:21:54.110 --rc geninfo_all_blocks=1 00:21:54.110 --rc geninfo_unexecuted_blocks=1 00:21:54.110 00:21:54.110 ' 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.110 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:54.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=f98b41de5b9b4b88b1a2851dd1d98924 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:54.111 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:59.384 12:45:41 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:59.384 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:59.384 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:59.384 Found net devices under 0000:86:00.0: cvl_0_0 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:59.384 Found net devices under 0000:86:00.1: cvl_0_1 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:59.384 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:59.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:59.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:21:59.385 00:21:59.385 --- 10.0.0.2 ping statistics --- 00:21:59.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.385 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:59.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:59.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:21:59.385 00:21:59.385 --- 10.0.0.1 ping statistics --- 00:21:59.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.385 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2600923 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2600923 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2600923 ']' 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.385 12:45:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.385 [2024-11-28 12:45:41.824331] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:21:59.385 [2024-11-28 12:45:41.824384] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.385 [2024-11-28 12:45:41.891728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.642 [2024-11-28 12:45:41.934563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.642 [2024-11-28 12:45:41.934596] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.642 [2024-11-28 12:45:41.934603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.642 [2024-11-28 12:45:41.934609] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.642 [2024-11-28 12:45:41.934614] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:59.642 [2024-11-28 12:45:41.935165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.642 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.642 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:21:59.642 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:59.642 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:59.642 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.642 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.642 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:59.642 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.642 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.642 [2024-11-28 12:45:42.072341] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.642 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.642 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:59.642 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.642 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.642 null0 00:21:59.642 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.642 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:59.642 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.643 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.643 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.643 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:59.643 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.643 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.643 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.643 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f98b41de5b9b4b88b1a2851dd1d98924 00:21:59.643 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.643 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.643 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.643 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:59.643 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.643 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.643 [2024-11-28 12:45:42.112569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.643 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.643 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:59.643 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.643 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.900 nvme0n1 00:21:59.900 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.900 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:59.900 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.900 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.900 [ 00:21:59.900 { 00:21:59.900 "name": "nvme0n1", 00:21:59.900 "aliases": [ 00:21:59.900 "f98b41de-5b9b-4b88-b1a2-851dd1d98924" 00:21:59.900 ], 00:21:59.900 "product_name": "NVMe disk", 00:21:59.900 "block_size": 512, 00:21:59.900 "num_blocks": 2097152, 00:21:59.900 "uuid": "f98b41de-5b9b-4b88-b1a2-851dd1d98924", 00:21:59.900 "numa_id": 1, 00:21:59.900 "assigned_rate_limits": { 00:21:59.900 "rw_ios_per_sec": 0, 00:21:59.900 "rw_mbytes_per_sec": 0, 00:21:59.900 "r_mbytes_per_sec": 0, 00:21:59.900 "w_mbytes_per_sec": 0 00:21:59.900 }, 00:21:59.900 "claimed": false, 00:21:59.900 "zoned": false, 00:21:59.900 "supported_io_types": { 00:21:59.900 "read": true, 00:21:59.900 "write": true, 00:21:59.900 "unmap": false, 00:21:59.900 "flush": true, 00:21:59.900 "reset": true, 00:21:59.900 "nvme_admin": true, 00:21:59.900 "nvme_io": true, 00:21:59.900 "nvme_io_md": false, 00:21:59.900 "write_zeroes": true, 00:21:59.900 "zcopy": false, 00:21:59.900 "get_zone_info": false, 00:21:59.900 "zone_management": false, 00:21:59.900 "zone_append": false, 00:21:59.900 "compare": true, 00:21:59.900 "compare_and_write": true, 00:21:59.900 "abort": true, 00:21:59.900 "seek_hole": false, 00:21:59.900 "seek_data": false, 00:21:59.900 "copy": true, 00:21:59.900 
"nvme_iov_md": false 00:21:59.900 }, 00:21:59.900 "memory_domains": [ 00:21:59.900 { 00:21:59.900 "dma_device_id": "system", 00:21:59.900 "dma_device_type": 1 00:21:59.900 } 00:21:59.900 ], 00:21:59.900 "driver_specific": { 00:21:59.900 "nvme": [ 00:21:59.900 { 00:21:59.900 "trid": { 00:21:59.900 "trtype": "TCP", 00:21:59.900 "adrfam": "IPv4", 00:21:59.900 "traddr": "10.0.0.2", 00:21:59.900 "trsvcid": "4420", 00:21:59.900 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:59.900 }, 00:21:59.900 "ctrlr_data": { 00:21:59.900 "cntlid": 1, 00:21:59.900 "vendor_id": "0x8086", 00:21:59.900 "model_number": "SPDK bdev Controller", 00:21:59.900 "serial_number": "00000000000000000000", 00:21:59.900 "firmware_revision": "25.01", 00:21:59.900 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:59.900 "oacs": { 00:21:59.900 "security": 0, 00:21:59.900 "format": 0, 00:21:59.900 "firmware": 0, 00:21:59.900 "ns_manage": 0 00:21:59.900 }, 00:21:59.900 "multi_ctrlr": true, 00:21:59.900 "ana_reporting": false 00:21:59.900 }, 00:21:59.900 "vs": { 00:21:59.900 "nvme_version": "1.3" 00:21:59.900 }, 00:21:59.900 "ns_data": { 00:21:59.900 "id": 1, 00:21:59.900 "can_share": true 00:21:59.900 } 00:21:59.900 } 00:21:59.900 ], 00:21:59.900 "mp_policy": "active_passive" 00:21:59.900 } 00:21:59.900 } 00:21:59.900 ] 00:21:59.900 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.900 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:59.900 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.900 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.900 [2024-11-28 12:45:42.369140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:59.900 [2024-11-28 12:45:42.369211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1e8ee20 (9): Bad file descriptor 00:22:00.158 [2024-11-28 12:45:42.501029] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:22:00.158 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.158 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:00.158 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.158 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:00.158 [ 00:22:00.158 { 00:22:00.158 "name": "nvme0n1", 00:22:00.158 "aliases": [ 00:22:00.158 "f98b41de-5b9b-4b88-b1a2-851dd1d98924" 00:22:00.158 ], 00:22:00.158 "product_name": "NVMe disk", 00:22:00.158 "block_size": 512, 00:22:00.158 "num_blocks": 2097152, 00:22:00.158 "uuid": "f98b41de-5b9b-4b88-b1a2-851dd1d98924", 00:22:00.158 "numa_id": 1, 00:22:00.158 "assigned_rate_limits": { 00:22:00.158 "rw_ios_per_sec": 0, 00:22:00.158 "rw_mbytes_per_sec": 0, 00:22:00.158 "r_mbytes_per_sec": 0, 00:22:00.158 "w_mbytes_per_sec": 0 00:22:00.158 }, 00:22:00.158 "claimed": false, 00:22:00.158 "zoned": false, 00:22:00.158 "supported_io_types": { 00:22:00.158 "read": true, 00:22:00.158 "write": true, 00:22:00.158 "unmap": false, 00:22:00.158 "flush": true, 00:22:00.158 "reset": true, 00:22:00.158 "nvme_admin": true, 00:22:00.158 "nvme_io": true, 00:22:00.158 "nvme_io_md": false, 00:22:00.158 "write_zeroes": true, 00:22:00.158 "zcopy": false, 00:22:00.158 "get_zone_info": false, 00:22:00.158 "zone_management": false, 00:22:00.158 "zone_append": false, 00:22:00.159 "compare": true, 00:22:00.159 "compare_and_write": true, 00:22:00.159 "abort": true, 00:22:00.159 "seek_hole": false, 00:22:00.159 "seek_data": false, 00:22:00.159 "copy": true, 00:22:00.159 "nvme_iov_md": false 00:22:00.159 }, 00:22:00.159 "memory_domains": [ 
00:22:00.159 { 00:22:00.159 "dma_device_id": "system", 00:22:00.159 "dma_device_type": 1 00:22:00.159 } 00:22:00.159 ], 00:22:00.159 "driver_specific": { 00:22:00.159 "nvme": [ 00:22:00.159 { 00:22:00.159 "trid": { 00:22:00.159 "trtype": "TCP", 00:22:00.159 "adrfam": "IPv4", 00:22:00.159 "traddr": "10.0.0.2", 00:22:00.159 "trsvcid": "4420", 00:22:00.159 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:00.159 }, 00:22:00.159 "ctrlr_data": { 00:22:00.159 "cntlid": 2, 00:22:00.159 "vendor_id": "0x8086", 00:22:00.159 "model_number": "SPDK bdev Controller", 00:22:00.159 "serial_number": "00000000000000000000", 00:22:00.159 "firmware_revision": "25.01", 00:22:00.159 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:00.159 "oacs": { 00:22:00.159 "security": 0, 00:22:00.159 "format": 0, 00:22:00.159 "firmware": 0, 00:22:00.159 "ns_manage": 0 00:22:00.159 }, 00:22:00.159 "multi_ctrlr": true, 00:22:00.159 "ana_reporting": false 00:22:00.159 }, 00:22:00.159 "vs": { 00:22:00.159 "nvme_version": "1.3" 00:22:00.159 }, 00:22:00.159 "ns_data": { 00:22:00.159 "id": 1, 00:22:00.159 "can_share": true 00:22:00.159 } 00:22:00.159 } 00:22:00.159 ], 00:22:00.159 "mp_policy": "active_passive" 00:22:00.159 } 00:22:00.159 } 00:22:00.159 ] 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.bqWbro8CEy 
00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.bqWbro8CEy 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.bqWbro8CEy 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:00.159 [2024-11-28 12:45:42.569752] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:00.159 [2024-11-28 12:45:42.569847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:00.159 [2024-11-28 12:45:42.585812] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:00.159 nvme0n1 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.159 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:00.159 [ 00:22:00.159 { 00:22:00.159 "name": "nvme0n1", 00:22:00.159 "aliases": [ 00:22:00.159 "f98b41de-5b9b-4b88-b1a2-851dd1d98924" 00:22:00.159 ], 00:22:00.159 "product_name": "NVMe disk", 00:22:00.159 "block_size": 512, 00:22:00.159 "num_blocks": 2097152, 00:22:00.159 "uuid": "f98b41de-5b9b-4b88-b1a2-851dd1d98924", 00:22:00.159 "numa_id": 1, 00:22:00.159 "assigned_rate_limits": { 00:22:00.159 "rw_ios_per_sec": 0, 00:22:00.159 
"rw_mbytes_per_sec": 0, 00:22:00.159 "r_mbytes_per_sec": 0, 00:22:00.159 "w_mbytes_per_sec": 0 00:22:00.159 }, 00:22:00.159 "claimed": false, 00:22:00.159 "zoned": false, 00:22:00.159 "supported_io_types": { 00:22:00.159 "read": true, 00:22:00.159 "write": true, 00:22:00.159 "unmap": false, 00:22:00.159 "flush": true, 00:22:00.159 "reset": true, 00:22:00.159 "nvme_admin": true, 00:22:00.159 "nvme_io": true, 00:22:00.159 "nvme_io_md": false, 00:22:00.159 "write_zeroes": true, 00:22:00.159 "zcopy": false, 00:22:00.159 "get_zone_info": false, 00:22:00.159 "zone_management": false, 00:22:00.159 "zone_append": false, 00:22:00.159 "compare": true, 00:22:00.159 "compare_and_write": true, 00:22:00.159 "abort": true, 00:22:00.159 "seek_hole": false, 00:22:00.159 "seek_data": false, 00:22:00.159 "copy": true, 00:22:00.159 "nvme_iov_md": false 00:22:00.159 }, 00:22:00.159 "memory_domains": [ 00:22:00.159 { 00:22:00.159 "dma_device_id": "system", 00:22:00.159 "dma_device_type": 1 00:22:00.159 } 00:22:00.159 ], 00:22:00.159 "driver_specific": { 00:22:00.159 "nvme": [ 00:22:00.159 { 00:22:00.159 "trid": { 00:22:00.159 "trtype": "TCP", 00:22:00.159 "adrfam": "IPv4", 00:22:00.159 "traddr": "10.0.0.2", 00:22:00.159 "trsvcid": "4421", 00:22:00.159 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:00.159 }, 00:22:00.159 "ctrlr_data": { 00:22:00.159 "cntlid": 3, 00:22:00.159 "vendor_id": "0x8086", 00:22:00.159 "model_number": "SPDK bdev Controller", 00:22:00.159 "serial_number": "00000000000000000000", 00:22:00.159 "firmware_revision": "25.01", 00:22:00.159 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:00.159 "oacs": { 00:22:00.159 "security": 0, 00:22:00.159 "format": 0, 00:22:00.159 "firmware": 0, 00:22:00.418 "ns_manage": 0 00:22:00.418 }, 00:22:00.418 "multi_ctrlr": true, 00:22:00.418 "ana_reporting": false 00:22:00.418 }, 00:22:00.418 "vs": { 00:22:00.418 "nvme_version": "1.3" 00:22:00.418 }, 00:22:00.418 "ns_data": { 00:22:00.418 "id": 1, 00:22:00.418 "can_share": true 00:22:00.418 } 
00:22:00.418 } 00:22:00.418 ], 00:22:00.418 "mp_policy": "active_passive" 00:22:00.418 } 00:22:00.418 } 00:22:00.418 ] 00:22:00.418 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.418 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.418 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.418 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:00.418 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.418 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.bqWbro8CEy 00:22:00.418 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:22:00.418 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:00.418 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:00.418 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:00.418 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:00.418 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:00.418 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:00.418 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:00.418 rmmod nvme_tcp 00:22:00.418 rmmod nvme_fabrics 00:22:00.418 rmmod nvme_keyring 00:22:00.419 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:00.419 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:00.419 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:00.419 12:45:42 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2600923 ']' 00:22:00.419 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2600923 00:22:00.419 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2600923 ']' 00:22:00.419 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2600923 00:22:00.419 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:22:00.419 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:00.419 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2600923 00:22:00.419 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:00.419 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:00.419 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2600923' 00:22:00.419 killing process with pid 2600923 00:22:00.419 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2600923 00:22:00.419 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2600923 00:22:00.678 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:00.678 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:00.678 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:00.678 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:00.678 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:00.678 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:00.678 
12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:00.678 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:00.678 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:00.678 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.678 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.678 12:45:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.583 12:45:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:02.583 00:22:02.583 real 0m8.901s 00:22:02.583 user 0m2.929s 00:22:02.583 sys 0m4.408s 00:22:02.583 12:45:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:02.583 12:45:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.583 ************************************ 00:22:02.583 END TEST nvmf_async_init 00:22:02.583 ************************************ 00:22:02.583 12:45:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:02.583 12:45:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:02.583 12:45:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:02.583 12:45:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.842 ************************************ 00:22:02.842 START TEST dma 00:22:02.842 ************************************ 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:22:02.842 * Looking for test storage... 00:22:02.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:02.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.842 --rc genhtml_branch_coverage=1 00:22:02.842 --rc genhtml_function_coverage=1 00:22:02.842 --rc genhtml_legend=1 00:22:02.842 --rc geninfo_all_blocks=1 00:22:02.842 --rc geninfo_unexecuted_blocks=1 00:22:02.842 00:22:02.842 ' 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:02.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.842 --rc genhtml_branch_coverage=1 00:22:02.842 --rc genhtml_function_coverage=1 
00:22:02.842 --rc genhtml_legend=1 00:22:02.842 --rc geninfo_all_blocks=1 00:22:02.842 --rc geninfo_unexecuted_blocks=1 00:22:02.842 00:22:02.842 ' 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:02.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.842 --rc genhtml_branch_coverage=1 00:22:02.842 --rc genhtml_function_coverage=1 00:22:02.842 --rc genhtml_legend=1 00:22:02.842 --rc geninfo_all_blocks=1 00:22:02.842 --rc geninfo_unexecuted_blocks=1 00:22:02.842 00:22:02.842 ' 00:22:02.842 12:45:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:02.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.842 --rc genhtml_branch_coverage=1 00:22:02.842 --rc genhtml_function_coverage=1 00:22:02.842 --rc genhtml_legend=1 00:22:02.842 --rc geninfo_all_blocks=1 00:22:02.842 --rc geninfo_unexecuted_blocks=1 00:22:02.842 00:22:02.842 ' 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:02.843 
12:45:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:02.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:02.843 00:22:02.843 real 0m0.211s 00:22:02.843 user 0m0.119s 00:22:02.843 sys 0m0.108s 00:22:02.843 12:45:45 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:02.843 ************************************ 00:22:02.843 END TEST dma 00:22:02.843 ************************************ 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:02.843 12:45:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.103 ************************************ 00:22:03.103 START TEST nvmf_identify 00:22:03.103 ************************************ 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:03.103 * Looking for test storage... 
00:22:03.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:03.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.103 --rc genhtml_branch_coverage=1 00:22:03.103 --rc genhtml_function_coverage=1 00:22:03.103 --rc genhtml_legend=1 00:22:03.103 --rc geninfo_all_blocks=1 00:22:03.103 --rc geninfo_unexecuted_blocks=1 00:22:03.103 00:22:03.103 ' 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:22:03.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.103 --rc genhtml_branch_coverage=1 00:22:03.103 --rc genhtml_function_coverage=1 00:22:03.103 --rc genhtml_legend=1 00:22:03.103 --rc geninfo_all_blocks=1 00:22:03.103 --rc geninfo_unexecuted_blocks=1 00:22:03.103 00:22:03.103 ' 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:03.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.103 --rc genhtml_branch_coverage=1 00:22:03.103 --rc genhtml_function_coverage=1 00:22:03.103 --rc genhtml_legend=1 00:22:03.103 --rc geninfo_all_blocks=1 00:22:03.103 --rc geninfo_unexecuted_blocks=1 00:22:03.103 00:22:03.103 ' 00:22:03.103 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:03.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.104 --rc genhtml_branch_coverage=1 00:22:03.104 --rc genhtml_function_coverage=1 00:22:03.104 --rc genhtml_legend=1 00:22:03.104 --rc geninfo_all_blocks=1 00:22:03.104 --rc geninfo_unexecuted_blocks=1 00:22:03.104 00:22:03.104 ' 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:03.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:03.104 12:45:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:08.380 12:45:50 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:08.380 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.380 
12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:08.380 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:08.380 Found net devices under 0000:86:00.0: cvl_0_0 00:22:08.380 12:45:50 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:08.380 Found net devices under 0000:86:00.1: cvl_0_1 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:08.380 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:08.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:08.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:22:08.381 00:22:08.381 --- 10.0.0.2 ping statistics --- 00:22:08.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.381 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:08.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:08.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:22:08.381 00:22:08.381 --- 10.0.0.1 ping statistics --- 00:22:08.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.381 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2604566 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2604566 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2604566 ']' 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:08.381 12:45:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.381 [2024-11-28 12:45:50.870012] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:22:08.381 [2024-11-28 12:45:50.870061] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.640 [2024-11-28 12:45:50.935083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:08.640 [2024-11-28 12:45:50.979318] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:08.640 [2024-11-28 12:45:50.979357] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:08.640 [2024-11-28 12:45:50.979364] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:08.640 [2024-11-28 12:45:50.979370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:08.640 [2024-11-28 12:45:50.979375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:08.640 [2024-11-28 12:45:50.980921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.640 [2024-11-28 12:45:50.981025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.640 [2024-11-28 12:45:50.981049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:08.640 [2024-11-28 12:45:50.981051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.640 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:08.640 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:08.640 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:08.640 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.640 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.640 [2024-11-28 12:45:51.084147] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:08.640 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.640 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:08.640 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:08.640 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.640 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:08.640 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.640 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.901 Malloc0 00:22:08.901 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.901 12:45:51 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:08.901 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.901 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.901 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.901 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:08.901 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.901 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.901 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.901 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:08.901 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.901 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.901 [2024-11-28 12:45:51.188410] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:08.901 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.901 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:08.901 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.901 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.901 12:45:51 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.901 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:08.901 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.901 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:08.901 [ 00:22:08.901 { 00:22:08.901 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:08.901 "subtype": "Discovery", 00:22:08.901 "listen_addresses": [ 00:22:08.901 { 00:22:08.901 "trtype": "TCP", 00:22:08.901 "adrfam": "IPv4", 00:22:08.901 "traddr": "10.0.0.2", 00:22:08.901 "trsvcid": "4420" 00:22:08.901 } 00:22:08.901 ], 00:22:08.901 "allow_any_host": true, 00:22:08.901 "hosts": [] 00:22:08.901 }, 00:22:08.901 { 00:22:08.901 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.901 "subtype": "NVMe", 00:22:08.901 "listen_addresses": [ 00:22:08.901 { 00:22:08.901 "trtype": "TCP", 00:22:08.901 "adrfam": "IPv4", 00:22:08.901 "traddr": "10.0.0.2", 00:22:08.901 "trsvcid": "4420" 00:22:08.901 } 00:22:08.901 ], 00:22:08.901 "allow_any_host": true, 00:22:08.901 "hosts": [], 00:22:08.901 "serial_number": "SPDK00000000000001", 00:22:08.901 "model_number": "SPDK bdev Controller", 00:22:08.901 "max_namespaces": 32, 00:22:08.901 "min_cntlid": 1, 00:22:08.901 "max_cntlid": 65519, 00:22:08.901 "namespaces": [ 00:22:08.901 { 00:22:08.901 "nsid": 1, 00:22:08.901 "bdev_name": "Malloc0", 00:22:08.901 "name": "Malloc0", 00:22:08.901 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:08.901 "eui64": "ABCDEF0123456789", 00:22:08.901 "uuid": "c5ebddd4-1679-411d-8514-34d6718d3615" 00:22:08.901 } 00:22:08.901 ] 00:22:08.901 } 00:22:08.901 ] 00:22:08.901 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.901 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:08.901 [2024-11-28 12:45:51.240964] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:22:08.901 [2024-11-28 12:45:51.240997] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604761 ] 00:22:08.901 [2024-11-28 12:45:51.283052] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:08.901 [2024-11-28 12:45:51.283103] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:08.901 [2024-11-28 12:45:51.283108] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:08.901 [2024-11-28 12:45:51.283124] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:08.901 [2024-11-28 12:45:51.283133] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:08.901 [2024-11-28 12:45:51.287257] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:08.901 [2024-11-28 12:45:51.287289] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x15fc690 0 00:22:08.901 [2024-11-28 12:45:51.294955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:08.901 [2024-11-28 12:45:51.294968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:08.901 [2024-11-28 12:45:51.294972] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:08.901 [2024-11-28 12:45:51.294975] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:08.901 [2024-11-28 12:45:51.295009] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.901 [2024-11-28 12:45:51.295015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.901 [2024-11-28 12:45:51.295018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15fc690) 00:22:08.902 [2024-11-28 12:45:51.295029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:08.902 [2024-11-28 12:45:51.295047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e100, cid 0, qid 0 00:22:08.902 [2024-11-28 12:45:51.302958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.902 [2024-11-28 12:45:51.302967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.902 [2024-11-28 12:45:51.302973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.902 [2024-11-28 12:45:51.302977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e100) on tqpair=0x15fc690 00:22:08.902 [2024-11-28 12:45:51.302988] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:08.902 [2024-11-28 12:45:51.302995] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:08.902 [2024-11-28 12:45:51.303000] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:08.902 [2024-11-28 12:45:51.303014] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.902 [2024-11-28 12:45:51.303018] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.902 [2024-11-28 12:45:51.303021] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15fc690) 
00:22:08.902 [2024-11-28 12:45:51.303027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.902 [2024-11-28 12:45:51.303040] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e100, cid 0, qid 0 00:22:08.902 [2024-11-28 12:45:51.303210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.902 [2024-11-28 12:45:51.303216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.902 [2024-11-28 12:45:51.303219] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.902 [2024-11-28 12:45:51.303222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e100) on tqpair=0x15fc690 00:22:08.902 [2024-11-28 12:45:51.303230] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:08.902 [2024-11-28 12:45:51.303237] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:08.902 [2024-11-28 12:45:51.303243] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.902 [2024-11-28 12:45:51.303247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.902 [2024-11-28 12:45:51.303250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15fc690) 00:22:08.902 [2024-11-28 12:45:51.303255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.902 [2024-11-28 12:45:51.303266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e100, cid 0, qid 0 00:22:08.902 [2024-11-28 12:45:51.303334] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.902 [2024-11-28 12:45:51.303340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:08.902 [2024-11-28 12:45:51.303343] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.902 [2024-11-28 12:45:51.303346] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e100) on tqpair=0x15fc690 00:22:08.902 [2024-11-28 12:45:51.303351] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:08.902 [2024-11-28 12:45:51.303358] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:08.902 [2024-11-28 12:45:51.303364] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.902 [2024-11-28 12:45:51.303368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.902 [2024-11-28 12:45:51.303371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15fc690) 00:22:08.902 [2024-11-28 12:45:51.303376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.902 [2024-11-28 12:45:51.303386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e100, cid 0, qid 0 00:22:08.902 [2024-11-28 12:45:51.303449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.902 [2024-11-28 12:45:51.303455] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.902 [2024-11-28 12:45:51.303460] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.902 [2024-11-28 12:45:51.303463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e100) on tqpair=0x15fc690 00:22:08.902 [2024-11-28 12:45:51.303467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:08.902 [2024-11-28 12:45:51.303476] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.902 [2024-11-28 12:45:51.303479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.902 [2024-11-28 12:45:51.303483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15fc690) 00:22:08.902 [2024-11-28 12:45:51.303488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.902 [2024-11-28 12:45:51.303498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e100, cid 0, qid 0 00:22:08.902 [2024-11-28 12:45:51.303564] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.902 [2024-11-28 12:45:51.303570] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.902 [2024-11-28 12:45:51.303573] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.902 [2024-11-28 12:45:51.303577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e100) on tqpair=0x15fc690 00:22:08.902 [2024-11-28 12:45:51.303580] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:08.902 [2024-11-28 12:45:51.303585] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:08.902 [2024-11-28 12:45:51.303592] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:08.902 [2024-11-28 12:45:51.303699] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:08.902 [2024-11-28 12:45:51.303704] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:22:08.902 [2024-11-28 12:45:51.303711] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.902 [2024-11-28 12:45:51.303715] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.902 [2024-11-28 12:45:51.303718] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15fc690) 00:22:08.902 [2024-11-28 12:45:51.303723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.902 [2024-11-28 12:45:51.303733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e100, cid 0, qid 0 00:22:08.902 [2024-11-28 12:45:51.303803] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.902 [2024-11-28 12:45:51.303809] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.902 [2024-11-28 12:45:51.303812] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.902 [2024-11-28 12:45:51.303815] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e100) on tqpair=0x15fc690 00:22:08.902 [2024-11-28 12:45:51.303820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:08.902 [2024-11-28 12:45:51.303828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.902 [2024-11-28 12:45:51.303831] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.902 [2024-11-28 12:45:51.303835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15fc690) 00:22:08.902 [2024-11-28 12:45:51.303840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.903 [2024-11-28 12:45:51.303849] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e100, cid 0, qid 0 00:22:08.903 [2024-11-28 
12:45:51.303914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.903 [2024-11-28 12:45:51.303923] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.903 [2024-11-28 12:45:51.303927] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.903 [2024-11-28 12:45:51.303930] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e100) on tqpair=0x15fc690 00:22:08.903 [2024-11-28 12:45:51.303934] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:08.903 [2024-11-28 12:45:51.303938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:08.903 [2024-11-28 12:45:51.303945] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:08.903 [2024-11-28 12:45:51.303963] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:08.903 [2024-11-28 12:45:51.303972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.903 [2024-11-28 12:45:51.303976] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15fc690) 00:22:08.903 [2024-11-28 12:45:51.303981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.903 [2024-11-28 12:45:51.303991] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e100, cid 0, qid 0 00:22:08.903 [2024-11-28 12:45:51.304088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.903 [2024-11-28 12:45:51.304094] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:22:08.903 [2024-11-28 12:45:51.304097] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.903 [2024-11-28 12:45:51.304101] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15fc690): datao=0, datal=4096, cccid=0 00:22:08.903 [2024-11-28 12:45:51.304105] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x165e100) on tqpair(0x15fc690): expected_datao=0, payload_size=4096 00:22:08.903 [2024-11-28 12:45:51.304109] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.903 [2024-11-28 12:45:51.304116] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.903 [2024-11-28 12:45:51.304119] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.903 [2024-11-28 12:45:51.304132] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.903 [2024-11-28 12:45:51.304137] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.903 [2024-11-28 12:45:51.304140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.903 [2024-11-28 12:45:51.304143] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e100) on tqpair=0x15fc690 00:22:08.903 [2024-11-28 12:45:51.304150] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:08.903 [2024-11-28 12:45:51.304154] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:08.903 [2024-11-28 12:45:51.304158] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:08.903 [2024-11-28 12:45:51.304163] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:08.903 [2024-11-28 12:45:51.304167] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:08.903 [2024-11-28 12:45:51.304171] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:08.903 [2024-11-28 12:45:51.304178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:08.903 [2024-11-28 12:45:51.304184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.903 [2024-11-28 12:45:51.304187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.903 [2024-11-28 12:45:51.304193] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15fc690) 00:22:08.903 [2024-11-28 12:45:51.304198] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:08.903 [2024-11-28 12:45:51.304208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e100, cid 0, qid 0 00:22:08.903 [2024-11-28 12:45:51.304279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.903 [2024-11-28 12:45:51.304284] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.903 [2024-11-28 12:45:51.304288] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.903 [2024-11-28 12:45:51.304291] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e100) on tqpair=0x15fc690 00:22:08.903 [2024-11-28 12:45:51.304297] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.903 [2024-11-28 12:45:51.304300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.903 [2024-11-28 12:45:51.304303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15fc690) 00:22:08.903 [2024-11-28 12:45:51.304308] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.903 [2024-11-28 12:45:51.304314] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.903 [2024-11-28 12:45:51.304317] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.903 [2024-11-28 12:45:51.304320] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x15fc690) 00:22:08.903 [2024-11-28 12:45:51.304325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.903 [2024-11-28 12:45:51.304330] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.903 [2024-11-28 12:45:51.304333] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.903 [2024-11-28 12:45:51.304337] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x15fc690) 00:22:08.903 [2024-11-28 12:45:51.304342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.903 [2024-11-28 12:45:51.304347] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.903 [2024-11-28 12:45:51.304350] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.903 [2024-11-28 12:45:51.304353] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:08.903 [2024-11-28 12:45:51.304358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.903 [2024-11-28 12:45:51.304362] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:08.903 [2024-11-28 12:45:51.304373] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:08.903 [2024-11-28 12:45:51.304379] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.903 [2024-11-28 12:45:51.304382] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15fc690) 00:22:08.903 [2024-11-28 12:45:51.304387] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.903 [2024-11-28 12:45:51.304398] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e100, cid 0, qid 0 00:22:08.903 [2024-11-28 12:45:51.304403] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e280, cid 1, qid 0 00:22:08.903 [2024-11-28 12:45:51.304407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e400, cid 2, qid 0 00:22:08.903 [2024-11-28 12:45:51.304411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:08.903 [2024-11-28 12:45:51.304415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e700, cid 4, qid 0 00:22:08.903 [2024-11-28 12:45:51.304518] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.903 [2024-11-28 12:45:51.304523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.903 [2024-11-28 12:45:51.304527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.903 [2024-11-28 12:45:51.304530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e700) on tqpair=0x15fc690 00:22:08.904 [2024-11-28 12:45:51.304534] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:08.904 [2024-11-28 12:45:51.304538] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:22:08.904 [2024-11-28 12:45:51.304547] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.304551] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15fc690) 00:22:08.904 [2024-11-28 12:45:51.304556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.904 [2024-11-28 12:45:51.304566] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e700, cid 4, qid 0 00:22:08.904 [2024-11-28 12:45:51.304643] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.904 [2024-11-28 12:45:51.304649] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.904 [2024-11-28 12:45:51.304652] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.304656] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15fc690): datao=0, datal=4096, cccid=4 00:22:08.904 [2024-11-28 12:45:51.304659] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x165e700) on tqpair(0x15fc690): expected_datao=0, payload_size=4096 00:22:08.904 [2024-11-28 12:45:51.304663] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.304677] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.304681] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.345080] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.904 [2024-11-28 12:45:51.345093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.904 [2024-11-28 12:45:51.345097] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.345101] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x165e700) on tqpair=0x15fc690 00:22:08.904 [2024-11-28 12:45:51.345114] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:08.904 [2024-11-28 12:45:51.345138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.345142] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15fc690) 00:22:08.904 [2024-11-28 12:45:51.345149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.904 [2024-11-28 12:45:51.345156] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.345159] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.345162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15fc690) 00:22:08.904 [2024-11-28 12:45:51.345168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.904 [2024-11-28 12:45:51.345183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e700, cid 4, qid 0 00:22:08.904 [2024-11-28 12:45:51.345188] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e880, cid 5, qid 0 00:22:08.904 [2024-11-28 12:45:51.345294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.904 [2024-11-28 12:45:51.345300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.904 [2024-11-28 12:45:51.345303] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.345309] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15fc690): datao=0, datal=1024, cccid=4 00:22:08.904 [2024-11-28 12:45:51.345313] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x165e700) on tqpair(0x15fc690): expected_datao=0, payload_size=1024 00:22:08.904 [2024-11-28 12:45:51.345317] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.345322] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.345326] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.345331] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.904 [2024-11-28 12:45:51.345336] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.904 [2024-11-28 12:45:51.345339] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.345342] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e880) on tqpair=0x15fc690 00:22:08.904 [2024-11-28 12:45:51.387954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.904 [2024-11-28 12:45:51.387964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.904 [2024-11-28 12:45:51.387967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.387970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e700) on tqpair=0x15fc690 00:22:08.904 [2024-11-28 12:45:51.387984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.387988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15fc690) 00:22:08.904 [2024-11-28 12:45:51.387995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.904 [2024-11-28 12:45:51.388011] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e700, cid 4, qid 0 00:22:08.904 [2024-11-28 12:45:51.388171] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.904 [2024-11-28 12:45:51.388177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.904 [2024-11-28 12:45:51.388180] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.388183] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15fc690): datao=0, datal=3072, cccid=4 00:22:08.904 [2024-11-28 12:45:51.388187] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x165e700) on tqpair(0x15fc690): expected_datao=0, payload_size=3072 00:22:08.904 [2024-11-28 12:45:51.388191] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.388197] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.388200] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.388242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.904 [2024-11-28 12:45:51.388248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.904 [2024-11-28 12:45:51.388251] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.388254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e700) on tqpair=0x15fc690 00:22:08.904 [2024-11-28 12:45:51.388262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.388265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15fc690) 00:22:08.904 [2024-11-28 12:45:51.388271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.904 [2024-11-28 12:45:51.388284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e700, cid 4, qid 0 00:22:08.904 [2024-11-28 
12:45:51.388378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.904 [2024-11-28 12:45:51.388384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.904 [2024-11-28 12:45:51.388387] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.388393] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15fc690): datao=0, datal=8, cccid=4 00:22:08.904 [2024-11-28 12:45:51.388397] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x165e700) on tqpair(0x15fc690): expected_datao=0, payload_size=8 00:22:08.904 [2024-11-28 12:45:51.388401] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.388406] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.904 [2024-11-28 12:45:51.388410] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.168 [2024-11-28 12:45:51.429087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.168 [2024-11-28 12:45:51.429097] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.168 [2024-11-28 12:45:51.429100] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.168 [2024-11-28 12:45:51.429104] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e700) on tqpair=0x15fc690 00:22:09.168 ===================================================== 00:22:09.168 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:09.168 ===================================================== 00:22:09.168 Controller Capabilities/Features 00:22:09.168 ================================ 00:22:09.168 Vendor ID: 0000 00:22:09.168 Subsystem Vendor ID: 0000 00:22:09.168 Serial Number: .................... 00:22:09.168 Model Number: ........................................ 
00:22:09.168 Firmware Version: 25.01 00:22:09.168 Recommended Arb Burst: 0 00:22:09.169 IEEE OUI Identifier: 00 00 00 00:22:09.169 Multi-path I/O 00:22:09.169 May have multiple subsystem ports: No 00:22:09.169 May have multiple controllers: No 00:22:09.169 Associated with SR-IOV VF: No 00:22:09.169 Max Data Transfer Size: 131072 00:22:09.169 Max Number of Namespaces: 0 00:22:09.169 Max Number of I/O Queues: 1024 00:22:09.169 NVMe Specification Version (VS): 1.3 00:22:09.169 NVMe Specification Version (Identify): 1.3 00:22:09.169 Maximum Queue Entries: 128 00:22:09.169 Contiguous Queues Required: Yes 00:22:09.169 Arbitration Mechanisms Supported 00:22:09.169 Weighted Round Robin: Not Supported 00:22:09.169 Vendor Specific: Not Supported 00:22:09.169 Reset Timeout: 15000 ms 00:22:09.169 Doorbell Stride: 4 bytes 00:22:09.169 NVM Subsystem Reset: Not Supported 00:22:09.169 Command Sets Supported 00:22:09.169 NVM Command Set: Supported 00:22:09.169 Boot Partition: Not Supported 00:22:09.169 Memory Page Size Minimum: 4096 bytes 00:22:09.169 Memory Page Size Maximum: 4096 bytes 00:22:09.169 Persistent Memory Region: Not Supported 00:22:09.169 Optional Asynchronous Events Supported 00:22:09.169 Namespace Attribute Notices: Not Supported 00:22:09.169 Firmware Activation Notices: Not Supported 00:22:09.169 ANA Change Notices: Not Supported 00:22:09.169 PLE Aggregate Log Change Notices: Not Supported 00:22:09.169 LBA Status Info Alert Notices: Not Supported 00:22:09.169 EGE Aggregate Log Change Notices: Not Supported 00:22:09.169 Normal NVM Subsystem Shutdown event: Not Supported 00:22:09.169 Zone Descriptor Change Notices: Not Supported 00:22:09.169 Discovery Log Change Notices: Supported 00:22:09.169 Controller Attributes 00:22:09.169 128-bit Host Identifier: Not Supported 00:22:09.169 Non-Operational Permissive Mode: Not Supported 00:22:09.169 NVM Sets: Not Supported 00:22:09.169 Read Recovery Levels: Not Supported 00:22:09.169 Endurance Groups: Not Supported 00:22:09.169 
Predictable Latency Mode: Not Supported 00:22:09.169 Traffic Based Keep ALive: Not Supported 00:22:09.169 Namespace Granularity: Not Supported 00:22:09.169 SQ Associations: Not Supported 00:22:09.169 UUID List: Not Supported 00:22:09.169 Multi-Domain Subsystem: Not Supported 00:22:09.169 Fixed Capacity Management: Not Supported 00:22:09.169 Variable Capacity Management: Not Supported 00:22:09.169 Delete Endurance Group: Not Supported 00:22:09.169 Delete NVM Set: Not Supported 00:22:09.169 Extended LBA Formats Supported: Not Supported 00:22:09.169 Flexible Data Placement Supported: Not Supported 00:22:09.169 00:22:09.169 Controller Memory Buffer Support 00:22:09.169 ================================ 00:22:09.169 Supported: No 00:22:09.169 00:22:09.169 Persistent Memory Region Support 00:22:09.169 ================================ 00:22:09.169 Supported: No 00:22:09.169 00:22:09.169 Admin Command Set Attributes 00:22:09.169 ============================ 00:22:09.169 Security Send/Receive: Not Supported 00:22:09.169 Format NVM: Not Supported 00:22:09.169 Firmware Activate/Download: Not Supported 00:22:09.169 Namespace Management: Not Supported 00:22:09.169 Device Self-Test: Not Supported 00:22:09.169 Directives: Not Supported 00:22:09.169 NVMe-MI: Not Supported 00:22:09.169 Virtualization Management: Not Supported 00:22:09.169 Doorbell Buffer Config: Not Supported 00:22:09.169 Get LBA Status Capability: Not Supported 00:22:09.169 Command & Feature Lockdown Capability: Not Supported 00:22:09.169 Abort Command Limit: 1 00:22:09.169 Async Event Request Limit: 4 00:22:09.169 Number of Firmware Slots: N/A 00:22:09.169 Firmware Slot 1 Read-Only: N/A 00:22:09.169 Firmware Activation Without Reset: N/A 00:22:09.169 Multiple Update Detection Support: N/A 00:22:09.169 Firmware Update Granularity: No Information Provided 00:22:09.169 Per-Namespace SMART Log: No 00:22:09.169 Asymmetric Namespace Access Log Page: Not Supported 00:22:09.169 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:22:09.169 Command Effects Log Page: Not Supported 00:22:09.169 Get Log Page Extended Data: Supported 00:22:09.169 Telemetry Log Pages: Not Supported 00:22:09.169 Persistent Event Log Pages: Not Supported 00:22:09.169 Supported Log Pages Log Page: May Support 00:22:09.169 Commands Supported & Effects Log Page: Not Supported 00:22:09.169 Feature Identifiers & Effects Log Page:May Support 00:22:09.169 NVMe-MI Commands & Effects Log Page: May Support 00:22:09.169 Data Area 4 for Telemetry Log: Not Supported 00:22:09.169 Error Log Page Entries Supported: 128 00:22:09.169 Keep Alive: Not Supported 00:22:09.169 00:22:09.169 NVM Command Set Attributes 00:22:09.169 ========================== 00:22:09.169 Submission Queue Entry Size 00:22:09.169 Max: 1 00:22:09.169 Min: 1 00:22:09.169 Completion Queue Entry Size 00:22:09.169 Max: 1 00:22:09.169 Min: 1 00:22:09.169 Number of Namespaces: 0 00:22:09.169 Compare Command: Not Supported 00:22:09.169 Write Uncorrectable Command: Not Supported 00:22:09.169 Dataset Management Command: Not Supported 00:22:09.169 Write Zeroes Command: Not Supported 00:22:09.169 Set Features Save Field: Not Supported 00:22:09.169 Reservations: Not Supported 00:22:09.169 Timestamp: Not Supported 00:22:09.169 Copy: Not Supported 00:22:09.169 Volatile Write Cache: Not Present 00:22:09.169 Atomic Write Unit (Normal): 1 00:22:09.169 Atomic Write Unit (PFail): 1 00:22:09.169 Atomic Compare & Write Unit: 1 00:22:09.169 Fused Compare & Write: Supported 00:22:09.169 Scatter-Gather List 00:22:09.169 SGL Command Set: Supported 00:22:09.169 SGL Keyed: Supported 00:22:09.169 SGL Bit Bucket Descriptor: Not Supported 00:22:09.169 SGL Metadata Pointer: Not Supported 00:22:09.169 Oversized SGL: Not Supported 00:22:09.169 SGL Metadata Address: Not Supported 00:22:09.169 SGL Offset: Supported 00:22:09.169 Transport SGL Data Block: Not Supported 00:22:09.169 Replay Protected Memory Block: Not Supported 00:22:09.169 00:22:09.169 
Firmware Slot Information 00:22:09.169 ========================= 00:22:09.169 Active slot: 0 00:22:09.169 00:22:09.169 00:22:09.169 Error Log 00:22:09.169 ========= 00:22:09.169 00:22:09.169 Active Namespaces 00:22:09.169 ================= 00:22:09.169 Discovery Log Page 00:22:09.169 ================== 00:22:09.169 Generation Counter: 2 00:22:09.169 Number of Records: 2 00:22:09.170 Record Format: 0 00:22:09.170 00:22:09.170 Discovery Log Entry 0 00:22:09.170 ---------------------- 00:22:09.170 Transport Type: 3 (TCP) 00:22:09.170 Address Family: 1 (IPv4) 00:22:09.170 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:09.170 Entry Flags: 00:22:09.170 Duplicate Returned Information: 1 00:22:09.170 Explicit Persistent Connection Support for Discovery: 1 00:22:09.170 Transport Requirements: 00:22:09.170 Secure Channel: Not Required 00:22:09.170 Port ID: 0 (0x0000) 00:22:09.170 Controller ID: 65535 (0xffff) 00:22:09.170 Admin Max SQ Size: 128 00:22:09.170 Transport Service Identifier: 4420 00:22:09.170 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:09.170 Transport Address: 10.0.0.2 00:22:09.170 Discovery Log Entry 1 00:22:09.170 ---------------------- 00:22:09.170 Transport Type: 3 (TCP) 00:22:09.170 Address Family: 1 (IPv4) 00:22:09.170 Subsystem Type: 2 (NVM Subsystem) 00:22:09.170 Entry Flags: 00:22:09.170 Duplicate Returned Information: 0 00:22:09.170 Explicit Persistent Connection Support for Discovery: 0 00:22:09.170 Transport Requirements: 00:22:09.170 Secure Channel: Not Required 00:22:09.170 Port ID: 0 (0x0000) 00:22:09.170 Controller ID: 65535 (0xffff) 00:22:09.170 Admin Max SQ Size: 128 00:22:09.170 Transport Service Identifier: 4420 00:22:09.170 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:09.170 Transport Address: 10.0.0.2 [2024-11-28 12:45:51.429189] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:22:09.170 [2024-11-28 
12:45:51.429200] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e100) on tqpair=0x15fc690 00:22:09.170 [2024-11-28 12:45:51.429207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.170 [2024-11-28 12:45:51.429211] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e280) on tqpair=0x15fc690 00:22:09.170 [2024-11-28 12:45:51.429215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.170 [2024-11-28 12:45:51.429220] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e400) on tqpair=0x15fc690 00:22:09.170 [2024-11-28 12:45:51.429224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.170 [2024-11-28 12:45:51.429228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.170 [2024-11-28 12:45:51.429232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.170 [2024-11-28 12:45:51.429240] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.170 [2024-11-28 12:45:51.429244] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.170 [2024-11-28 12:45:51.429247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.170 [2024-11-28 12:45:51.429253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.170 [2024-11-28 12:45:51.429268] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.170 [2024-11-28 12:45:51.429332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.170 [2024-11-28 
12:45:51.429338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.170 [2024-11-28 12:45:51.429341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.170 [2024-11-28 12:45:51.429345] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.170 [2024-11-28 12:45:51.429351] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.170 [2024-11-28 12:45:51.429354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.170 [2024-11-28 12:45:51.429357] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.170 [2024-11-28 12:45:51.429363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.170 [2024-11-28 12:45:51.429376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.170 [2024-11-28 12:45:51.429449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.170 [2024-11-28 12:45:51.429455] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.170 [2024-11-28 12:45:51.429458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.170 [2024-11-28 12:45:51.429463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.170 [2024-11-28 12:45:51.429468] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:09.170 [2024-11-28 12:45:51.429472] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:09.170 [2024-11-28 12:45:51.429480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.170 [2024-11-28 12:45:51.429484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.170 
[2024-11-28 12:45:51.429487] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.170 [2024-11-28 12:45:51.429493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.170 [2024-11-28 12:45:51.429502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.170 [2024-11-28 12:45:51.429567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.170 [2024-11-28 12:45:51.429573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.170 [2024-11-28 12:45:51.429576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.170 [2024-11-28 12:45:51.429580] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.170 [2024-11-28 12:45:51.429588] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.170 [2024-11-28 12:45:51.429592] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.170 [2024-11-28 12:45:51.429595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.170 [2024-11-28 12:45:51.429600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.170 [2024-11-28 12:45:51.429610] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.170 [2024-11-28 12:45:51.429672] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.170 [2024-11-28 12:45:51.429678] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.170 [2024-11-28 12:45:51.429681] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.170 [2024-11-28 12:45:51.429684] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on 
tqpair=0x15fc690 00:22:09.170 [2024-11-28 12:45:51.429692] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.170 [2024-11-28 12:45:51.429696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.170 [2024-11-28 12:45:51.429699] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.170 [2024-11-28 12:45:51.429705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.170 [2024-11-28 12:45:51.429714] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.170 [2024-11-28 12:45:51.429799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.170 [2024-11-28 12:45:51.429804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.170 [2024-11-28 12:45:51.429807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.170 [2024-11-28 12:45:51.429810] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.170 [2024-11-28 12:45:51.429819] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.170 [2024-11-28 12:45:51.429823] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.170 [2024-11-28 12:45:51.429826] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.170 [2024-11-28 12:45:51.429831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.170 [2024-11-28 12:45:51.429841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.170 [2024-11-28 12:45:51.429903] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.171 [2024-11-28 12:45:51.429910] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:22:09.171 [2024-11-28 12:45:51.429913] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.429916] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.171 [2024-11-28 12:45:51.429925] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.429928] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.429931] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.171 [2024-11-28 12:45:51.429937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.171 [2024-11-28 12:45:51.429952] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.171 [2024-11-28 12:45:51.430012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.171 [2024-11-28 12:45:51.430018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.171 [2024-11-28 12:45:51.430020] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430024] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.171 [2024-11-28 12:45:51.430032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430039] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.171 [2024-11-28 12:45:51.430044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.171 [2024-11-28 12:45:51.430054] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x165e580, cid 3, qid 0 00:22:09.171 [2024-11-28 12:45:51.430120] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.171 [2024-11-28 12:45:51.430125] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.171 [2024-11-28 12:45:51.430128] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430132] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.171 [2024-11-28 12:45:51.430140] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430143] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430146] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.171 [2024-11-28 12:45:51.430152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.171 [2024-11-28 12:45:51.430161] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.171 [2024-11-28 12:45:51.430234] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.171 [2024-11-28 12:45:51.430239] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.171 [2024-11-28 12:45:51.430242] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.171 [2024-11-28 12:45:51.430255] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.171 [2024-11-28 12:45:51.430267] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.171 [2024-11-28 12:45:51.430277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.171 [2024-11-28 12:45:51.430343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.171 [2024-11-28 12:45:51.430349] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.171 [2024-11-28 12:45:51.430354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.171 [2024-11-28 12:45:51.430365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430372] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.171 [2024-11-28 12:45:51.430378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.171 [2024-11-28 12:45:51.430387] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.171 [2024-11-28 12:45:51.430452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.171 [2024-11-28 12:45:51.430458] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.171 [2024-11-28 12:45:51.430461] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.171 [2024-11-28 12:45:51.430472] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430476] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430479] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.171 [2024-11-28 12:45:51.430485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.171 [2024-11-28 12:45:51.430494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.171 [2024-11-28 12:45:51.430560] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.171 [2024-11-28 12:45:51.430565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.171 [2024-11-28 12:45:51.430568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430571] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.171 [2024-11-28 12:45:51.430579] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430583] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430586] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.171 [2024-11-28 12:45:51.430592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.171 [2024-11-28 12:45:51.430601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.171 [2024-11-28 12:45:51.430668] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.171 [2024-11-28 12:45:51.430673] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.171 [2024-11-28 12:45:51.430676] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430680] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.171 [2024-11-28 12:45:51.430688] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430691] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430694] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.171 [2024-11-28 12:45:51.430700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.171 [2024-11-28 12:45:51.430710] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.171 [2024-11-28 12:45:51.430780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.171 [2024-11-28 12:45:51.430786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.171 [2024-11-28 12:45:51.430789] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430794] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.171 [2024-11-28 12:45:51.430803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430806] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.171 [2024-11-28 12:45:51.430810] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.171 [2024-11-28 12:45:51.430815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.171 [2024-11-28 12:45:51.430825] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.172 [2024-11-28 12:45:51.430896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.172 [2024-11-28 
12:45:51.430901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.172 [2024-11-28 12:45:51.430904] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.430908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.172 [2024-11-28 12:45:51.430915] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.430919] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.430922] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.172 [2024-11-28 12:45:51.430928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.172 [2024-11-28 12:45:51.430937] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.172 [2024-11-28 12:45:51.431004] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.172 [2024-11-28 12:45:51.431010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.172 [2024-11-28 12:45:51.431013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431017] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.172 [2024-11-28 12:45:51.431025] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431031] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.172 [2024-11-28 12:45:51.431037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.172 [2024-11-28 
12:45:51.431047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.172 [2024-11-28 12:45:51.431112] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.172 [2024-11-28 12:45:51.431118] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.172 [2024-11-28 12:45:51.431121] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.172 [2024-11-28 12:45:51.431132] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431136] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431139] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.172 [2024-11-28 12:45:51.431144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.172 [2024-11-28 12:45:51.431153] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.172 [2024-11-28 12:45:51.431220] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.172 [2024-11-28 12:45:51.431226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.172 [2024-11-28 12:45:51.431229] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431232] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.172 [2024-11-28 12:45:51.431242] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431246] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.172 [2024-11-28 12:45:51.431254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.172 [2024-11-28 12:45:51.431264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.172 [2024-11-28 12:45:51.431334] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.172 [2024-11-28 12:45:51.431339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.172 [2024-11-28 12:45:51.431342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431345] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.172 [2024-11-28 12:45:51.431354] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431358] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431361] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.172 [2024-11-28 12:45:51.431367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.172 [2024-11-28 12:45:51.431377] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.172 [2024-11-28 12:45:51.431438] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.172 [2024-11-28 12:45:51.431444] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.172 [2024-11-28 12:45:51.431446] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431450] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.172 [2024-11-28 12:45:51.431458] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431461] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431465] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.172 [2024-11-28 12:45:51.431470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.172 [2024-11-28 12:45:51.431479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.172 [2024-11-28 12:45:51.431552] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.172 [2024-11-28 12:45:51.431557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.172 [2024-11-28 12:45:51.431560] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431564] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.172 [2024-11-28 12:45:51.431573] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431576] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.172 [2024-11-28 12:45:51.431585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.172 [2024-11-28 12:45:51.431595] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.172 [2024-11-28 12:45:51.431658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.172 [2024-11-28 12:45:51.431664] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.172 [2024-11-28 12:45:51.431666] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.172 [2024-11-28 12:45:51.431678] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431683] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.172 [2024-11-28 12:45:51.431692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.172 [2024-11-28 12:45:51.431701] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.172 [2024-11-28 12:45:51.431767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.172 [2024-11-28 12:45:51.431772] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.172 [2024-11-28 12:45:51.431775] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.172 [2024-11-28 12:45:51.431786] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.172 [2024-11-28 12:45:51.431793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.172 [2024-11-28 12:45:51.431799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.173 [2024-11-28 12:45:51.431808] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.173 [2024-11-28 
12:45:51.431876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.173 [2024-11-28 12:45:51.431882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.173 [2024-11-28 12:45:51.431885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.173 [2024-11-28 12:45:51.431888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.173 [2024-11-28 12:45:51.431896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.173 [2024-11-28 12:45:51.431900] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.173 [2024-11-28 12:45:51.431903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.173 [2024-11-28 12:45:51.431909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.173 [2024-11-28 12:45:51.431918] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.173 [2024-11-28 12:45:51.435954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.173 [2024-11-28 12:45:51.435961] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.173 [2024-11-28 12:45:51.435964] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.173 [2024-11-28 12:45:51.435968] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.173 [2024-11-28 12:45:51.435977] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.173 [2024-11-28 12:45:51.435981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.173 [2024-11-28 12:45:51.435984] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15fc690) 00:22:09.173 [2024-11-28 12:45:51.435990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.173 [2024-11-28 12:45:51.436001] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x165e580, cid 3, qid 0 00:22:09.173 [2024-11-28 12:45:51.436153] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.173 [2024-11-28 12:45:51.436158] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.173 [2024-11-28 12:45:51.436161] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.173 [2024-11-28 12:45:51.436165] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x165e580) on tqpair=0x15fc690 00:22:09.173 [2024-11-28 12:45:51.436171] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:22:09.173 00:22:09.173 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:09.173 [2024-11-28 12:45:51.473649] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:22:09.173 [2024-11-28 12:45:51.473683] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604770 ] 00:22:09.173 [2024-11-28 12:45:51.514606] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:09.173 [2024-11-28 12:45:51.514648] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:09.173 [2024-11-28 12:45:51.514653] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:09.173 [2024-11-28 12:45:51.514668] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:09.173 [2024-11-28 12:45:51.514676] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:09.173 [2024-11-28 12:45:51.515142] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:09.173 [2024-11-28 12:45:51.515172] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xcb6690 0 00:22:09.173 [2024-11-28 12:45:51.525961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:09.173 [2024-11-28 12:45:51.525977] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:09.173 [2024-11-28 12:45:51.525981] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:09.173 [2024-11-28 12:45:51.525984] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:09.173 [2024-11-28 12:45:51.526012] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.173 [2024-11-28 12:45:51.526018] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.173 [2024-11-28 12:45:51.526021] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb6690) 00:22:09.173 [2024-11-28 12:45:51.526031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:09.173 [2024-11-28 12:45:51.526047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18100, cid 0, qid 0 00:22:09.173 [2024-11-28 12:45:51.533958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.173 [2024-11-28 12:45:51.533966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.173 [2024-11-28 12:45:51.533969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.173 [2024-11-28 12:45:51.533973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18100) on tqpair=0xcb6690 00:22:09.173 [2024-11-28 12:45:51.533981] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:09.173 [2024-11-28 12:45:51.533986] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:09.173 [2024-11-28 12:45:51.533991] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:09.173 [2024-11-28 12:45:51.534003] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.173 [2024-11-28 12:45:51.534008] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.173 [2024-11-28 12:45:51.534011] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb6690) 00:22:09.173 [2024-11-28 12:45:51.534017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.173 [2024-11-28 12:45:51.534030] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18100, cid 0, qid 0 00:22:09.173 [2024-11-28 12:45:51.534189] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.173 [2024-11-28 12:45:51.534196] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.173 [2024-11-28 12:45:51.534199] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.173 [2024-11-28 12:45:51.534202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18100) on tqpair=0xcb6690 00:22:09.173 [2024-11-28 12:45:51.534208] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:09.173 [2024-11-28 12:45:51.534215] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:09.173 [2024-11-28 12:45:51.534221] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.173 [2024-11-28 12:45:51.534225] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.173 [2024-11-28 12:45:51.534228] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb6690) 00:22:09.173 [2024-11-28 12:45:51.534234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.173 [2024-11-28 12:45:51.534245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18100, cid 0, qid 0 00:22:09.173 [2024-11-28 12:45:51.534308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.173 [2024-11-28 12:45:51.534314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.173 [2024-11-28 12:45:51.534317] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.173 [2024-11-28 12:45:51.534320] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18100) on tqpair=0xcb6690 00:22:09.173 [2024-11-28 12:45:51.534325] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to check en (no timeout) 00:22:09.173 [2024-11-28 12:45:51.534331] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:09.173 [2024-11-28 12:45:51.534337] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.173 [2024-11-28 12:45:51.534341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.173 [2024-11-28 12:45:51.534344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb6690) 00:22:09.173 [2024-11-28 12:45:51.534349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.173 [2024-11-28 12:45:51.534359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18100, cid 0, qid 0 00:22:09.174 [2024-11-28 12:45:51.534424] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.174 [2024-11-28 12:45:51.534430] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.174 [2024-11-28 12:45:51.534433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.174 [2024-11-28 12:45:51.534436] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18100) on tqpair=0xcb6690 00:22:09.174 [2024-11-28 12:45:51.534440] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:09.174 [2024-11-28 12:45:51.534449] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.174 [2024-11-28 12:45:51.534452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.174 [2024-11-28 12:45:51.534455] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb6690) 00:22:09.174 [2024-11-28 12:45:51.534461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.174 [2024-11-28 12:45:51.534470] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18100, cid 0, qid 0 00:22:09.174 [2024-11-28 12:45:51.534532] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.174 [2024-11-28 12:45:51.534538] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.174 [2024-11-28 12:45:51.534541] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.174 [2024-11-28 12:45:51.534546] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18100) on tqpair=0xcb6690 00:22:09.174 [2024-11-28 12:45:51.534550] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:09.174 [2024-11-28 12:45:51.534554] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:09.174 [2024-11-28 12:45:51.534561] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:09.174 [2024-11-28 12:45:51.534668] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:09.174 [2024-11-28 12:45:51.534673] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:09.174 [2024-11-28 12:45:51.534679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.174 [2024-11-28 12:45:51.534682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.174 [2024-11-28 12:45:51.534686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb6690) 00:22:09.174 [2024-11-28 12:45:51.534691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.174 [2024-11-28 12:45:51.534701] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18100, cid 0, qid 0 00:22:09.174 [2024-11-28 12:45:51.534769] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.174 [2024-11-28 12:45:51.534775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.174 [2024-11-28 12:45:51.534778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.174 [2024-11-28 12:45:51.534781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18100) on tqpair=0xcb6690 00:22:09.174 [2024-11-28 12:45:51.534785] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:09.174 [2024-11-28 12:45:51.534793] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.174 [2024-11-28 12:45:51.534796] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.174 [2024-11-28 12:45:51.534800] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb6690) 00:22:09.174 [2024-11-28 12:45:51.534805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.174 [2024-11-28 12:45:51.534815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18100, cid 0, qid 0 00:22:09.174 [2024-11-28 12:45:51.534882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.174 [2024-11-28 12:45:51.534888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.174 [2024-11-28 12:45:51.534891] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.174 [2024-11-28 12:45:51.534894] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18100) on tqpair=0xcb6690 00:22:09.174 [2024-11-28 12:45:51.534898] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:09.174 [2024-11-28 12:45:51.534902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:09.174 [2024-11-28 12:45:51.534909] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:09.174 [2024-11-28 12:45:51.534921] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:09.174 [2024-11-28 12:45:51.534929] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.174 [2024-11-28 12:45:51.534932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb6690) 00:22:09.174 [2024-11-28 12:45:51.534938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.174 [2024-11-28 12:45:51.534955] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18100, cid 0, qid 0 00:22:09.174 [2024-11-28 12:45:51.535059] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.174 [2024-11-28 12:45:51.535065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.174 [2024-11-28 12:45:51.535068] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.174 [2024-11-28 12:45:51.535072] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcb6690): datao=0, datal=4096, cccid=0 00:22:09.174 [2024-11-28 12:45:51.535076] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd18100) on tqpair(0xcb6690): expected_datao=0, payload_size=4096 00:22:09.174 [2024-11-28 12:45:51.535079] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.174 [2024-11-28 12:45:51.535086] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.174 [2024-11-28 12:45:51.535089] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.174 [2024-11-28 12:45:51.535101] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.174 [2024-11-28 12:45:51.535106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.174 [2024-11-28 12:45:51.535109] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.174 [2024-11-28 12:45:51.535113] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18100) on tqpair=0xcb6690 00:22:09.174 [2024-11-28 12:45:51.535119] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:09.174 [2024-11-28 12:45:51.535124] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:09.174 [2024-11-28 12:45:51.535128] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:09.174 [2024-11-28 12:45:51.535131] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:09.175 [2024-11-28 12:45:51.535135] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:09.175 [2024-11-28 12:45:51.535140] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:09.175 [2024-11-28 12:45:51.535147] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:09.175 [2024-11-28 12:45:51.535153] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.175 [2024-11-28 12:45:51.535156] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.175 [2024-11-28 12:45:51.535159] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb6690) 00:22:09.175 [2024-11-28 12:45:51.535165] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:09.175 [2024-11-28 12:45:51.535176] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18100, cid 0, qid 0 00:22:09.175 [2024-11-28 12:45:51.535242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.175 [2024-11-28 12:45:51.535248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.175 [2024-11-28 12:45:51.535251] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.175 [2024-11-28 12:45:51.535254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18100) on tqpair=0xcb6690 00:22:09.175 [2024-11-28 12:45:51.535260] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.175 [2024-11-28 12:45:51.535263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.175 [2024-11-28 12:45:51.535266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb6690) 00:22:09.175 [2024-11-28 12:45:51.535271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.175 [2024-11-28 12:45:51.535276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.175 [2024-11-28 12:45:51.535282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.175 [2024-11-28 12:45:51.535285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xcb6690) 00:22:09.175 [2024-11-28 12:45:51.535290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:09.175 [2024-11-28 12:45:51.535295] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.175 [2024-11-28 12:45:51.535298] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.175 [2024-11-28 12:45:51.535301] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xcb6690) 00:22:09.175 [2024-11-28 12:45:51.535306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.175 [2024-11-28 12:45:51.535311] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.175 [2024-11-28 12:45:51.535315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.175 [2024-11-28 12:45:51.535318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcb6690) 00:22:09.175 [2024-11-28 12:45:51.535322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.175 [2024-11-28 12:45:51.535327] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:09.175 [2024-11-28 12:45:51.535337] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:09.175 [2024-11-28 12:45:51.535342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.175 [2024-11-28 12:45:51.535346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcb6690) 00:22:09.175 [2024-11-28 12:45:51.535351] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.175 [2024-11-28 12:45:51.535362] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xd18100, cid 0, qid 0 00:22:09.175 [2024-11-28 12:45:51.535366] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18280, cid 1, qid 0 00:22:09.175 [2024-11-28 12:45:51.535371] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18400, cid 2, qid 0 00:22:09.175 [2024-11-28 12:45:51.535374] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18580, cid 3, qid 0 00:22:09.175 [2024-11-28 12:45:51.535378] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18700, cid 4, qid 0 00:22:09.175 [2024-11-28 12:45:51.535479] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.175 [2024-11-28 12:45:51.535485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.175 [2024-11-28 12:45:51.535488] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.175 [2024-11-28 12:45:51.535491] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18700) on tqpair=0xcb6690 00:22:09.175 [2024-11-28 12:45:51.535496] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:09.175 [2024-11-28 12:45:51.535500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:09.175 [2024-11-28 12:45:51.535508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:09.175 [2024-11-28 12:45:51.535514] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:09.175 [2024-11-28 12:45:51.535520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.175 [2024-11-28 12:45:51.535523] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.175 [2024-11-28 
12:45:51.535526] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcb6690) 00:22:09.175 [2024-11-28 12:45:51.535533] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:09.175 [2024-11-28 12:45:51.535543] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18700, cid 4, qid 0 00:22:09.175 [2024-11-28 12:45:51.535610] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.175 [2024-11-28 12:45:51.535616] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.175 [2024-11-28 12:45:51.535619] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.175 [2024-11-28 12:45:51.535622] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18700) on tqpair=0xcb6690 00:22:09.175 [2024-11-28 12:45:51.535675] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:09.175 [2024-11-28 12:45:51.535685] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:09.175 [2024-11-28 12:45:51.535692] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.175 [2024-11-28 12:45:51.535695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcb6690) 00:22:09.175 [2024-11-28 12:45:51.535701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.175 [2024-11-28 12:45:51.535710] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18700, cid 4, qid 0 00:22:09.175 [2024-11-28 12:45:51.535789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.175 [2024-11-28 12:45:51.535795] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.175 [2024-11-28 12:45:51.535797] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.175 [2024-11-28 12:45:51.535801] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcb6690): datao=0, datal=4096, cccid=4 00:22:09.175 [2024-11-28 12:45:51.535805] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd18700) on tqpair(0xcb6690): expected_datao=0, payload_size=4096 00:22:09.175 [2024-11-28 12:45:51.535808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.175 [2024-11-28 12:45:51.535815] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.175 [2024-11-28 12:45:51.535818] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.175 [2024-11-28 12:45:51.535830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.175 [2024-11-28 12:45:51.535835] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.175 [2024-11-28 12:45:51.535838] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.175 [2024-11-28 12:45:51.535841] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18700) on tqpair=0xcb6690 00:22:09.175 [2024-11-28 12:45:51.535851] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:09.176 [2024-11-28 12:45:51.535858] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:09.176 [2024-11-28 12:45:51.535867] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:09.176 [2024-11-28 12:45:51.535873] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.176 [2024-11-28 12:45:51.535876] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 
on tqpair(0xcb6690) 00:22:09.176 [2024-11-28 12:45:51.535881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.176 [2024-11-28 12:45:51.535892] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18700, cid 4, qid 0 00:22:09.176 [2024-11-28 12:45:51.535980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.176 [2024-11-28 12:45:51.535987] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.176 [2024-11-28 12:45:51.535989] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.176 [2024-11-28 12:45:51.535995] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcb6690): datao=0, datal=4096, cccid=4 00:22:09.176 [2024-11-28 12:45:51.535999] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd18700) on tqpair(0xcb6690): expected_datao=0, payload_size=4096 00:22:09.176 [2024-11-28 12:45:51.536003] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.176 [2024-11-28 12:45:51.536023] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.176 [2024-11-28 12:45:51.536027] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.176 [2024-11-28 12:45:51.536066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.176 [2024-11-28 12:45:51.536072] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.176 [2024-11-28 12:45:51.536075] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.176 [2024-11-28 12:45:51.536078] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18700) on tqpair=0xcb6690 00:22:09.176 [2024-11-28 12:45:51.536087] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:09.176 [2024-11-28 
12:45:51.536095] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:09.176 [2024-11-28 12:45:51.536101] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.176 [2024-11-28 12:45:51.536105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcb6690) 00:22:09.176 [2024-11-28 12:45:51.536111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.176 [2024-11-28 12:45:51.536121] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18700, cid 4, qid 0 00:22:09.176 [2024-11-28 12:45:51.536232] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.176 [2024-11-28 12:45:51.536238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.176 [2024-11-28 12:45:51.536241] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.176 [2024-11-28 12:45:51.536244] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcb6690): datao=0, datal=4096, cccid=4 00:22:09.176 [2024-11-28 12:45:51.536248] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd18700) on tqpair(0xcb6690): expected_datao=0, payload_size=4096 00:22:09.176 [2024-11-28 12:45:51.536251] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.176 [2024-11-28 12:45:51.536257] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.176 [2024-11-28 12:45:51.536260] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.176 [2024-11-28 12:45:51.536280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.176 [2024-11-28 12:45:51.536285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.176 [2024-11-28 12:45:51.536288] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.176 [2024-11-28 12:45:51.536291] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18700) on tqpair=0xcb6690 00:22:09.176 [2024-11-28 12:45:51.536299] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:09.176 [2024-11-28 12:45:51.536308] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:09.176 [2024-11-28 12:45:51.536314] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:09.176 [2024-11-28 12:45:51.536320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:09.176 [2024-11-28 12:45:51.536324] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:09.176 [2024-11-28 12:45:51.536329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:09.176 [2024-11-28 12:45:51.536334] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:09.176 [2024-11-28 12:45:51.536339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:09.176 [2024-11-28 12:45:51.536343] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:09.176 [2024-11-28 12:45:51.536356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.176 [2024-11-28 12:45:51.536360] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcb6690) 00:22:09.176 [2024-11-28 12:45:51.536365] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.176 [2024-11-28 12:45:51.536371] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.176 [2024-11-28 12:45:51.536374] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.176 [2024-11-28 12:45:51.536377] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcb6690) 00:22:09.176 [2024-11-28 12:45:51.536382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.176 [2024-11-28 12:45:51.536395] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18700, cid 4, qid 0 00:22:09.176 [2024-11-28 12:45:51.536400] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18880, cid 5, qid 0 00:22:09.176 [2024-11-28 12:45:51.536475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.176 [2024-11-28 12:45:51.536481] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.176 [2024-11-28 12:45:51.536484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.176 [2024-11-28 12:45:51.536487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18700) on tqpair=0xcb6690 00:22:09.176 [2024-11-28 12:45:51.536493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.176 [2024-11-28 12:45:51.536498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.176 [2024-11-28 12:45:51.536501] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.176 [2024-11-28 12:45:51.536504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18880) on tqpair=0xcb6690 00:22:09.176 [2024-11-28 
12:45:51.536512] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.176 [2024-11-28 12:45:51.536516] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcb6690) 00:22:09.176 [2024-11-28 12:45:51.536521] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.176 [2024-11-28 12:45:51.536531] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18880, cid 5, qid 0 00:22:09.176 [2024-11-28 12:45:51.536607] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.176 [2024-11-28 12:45:51.536612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.176 [2024-11-28 12:45:51.536615] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.176 [2024-11-28 12:45:51.536618] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18880) on tqpair=0xcb6690 00:22:09.176 [2024-11-28 12:45:51.536627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.176 [2024-11-28 12:45:51.536630] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcb6690) 00:22:09.176 [2024-11-28 12:45:51.536636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.176 [2024-11-28 12:45:51.536645] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18880, cid 5, qid 0 00:22:09.177 [2024-11-28 12:45:51.536723] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.177 [2024-11-28 12:45:51.536728] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.177 [2024-11-28 12:45:51.536733] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.536736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0xd18880) on tqpair=0xcb6690 00:22:09.177 [2024-11-28 12:45:51.536745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.536748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcb6690) 00:22:09.177 [2024-11-28 12:45:51.536754] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.177 [2024-11-28 12:45:51.536763] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18880, cid 5, qid 0 00:22:09.177 [2024-11-28 12:45:51.536830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.177 [2024-11-28 12:45:51.536835] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.177 [2024-11-28 12:45:51.536838] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.536842] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18880) on tqpair=0xcb6690 00:22:09.177 [2024-11-28 12:45:51.536855] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.536860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcb6690) 00:22:09.177 [2024-11-28 12:45:51.536865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.177 [2024-11-28 12:45:51.536871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.536875] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcb6690) 00:22:09.177 [2024-11-28 12:45:51.536880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.177 
[2024-11-28 12:45:51.536886] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.536889] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xcb6690) 00:22:09.177 [2024-11-28 12:45:51.536895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.177 [2024-11-28 12:45:51.536901] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.536904] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xcb6690) 00:22:09.177 [2024-11-28 12:45:51.536909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.177 [2024-11-28 12:45:51.536920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18880, cid 5, qid 0 00:22:09.177 [2024-11-28 12:45:51.536925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18700, cid 4, qid 0 00:22:09.177 [2024-11-28 12:45:51.536928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18a00, cid 6, qid 0 00:22:09.177 [2024-11-28 12:45:51.536933] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18b80, cid 7, qid 0 00:22:09.177 [2024-11-28 12:45:51.537098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.177 [2024-11-28 12:45:51.537104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.177 [2024-11-28 12:45:51.537107] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.537110] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcb6690): datao=0, datal=8192, cccid=5 00:22:09.177 [2024-11-28 12:45:51.537114] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0xd18880) on tqpair(0xcb6690): expected_datao=0, payload_size=8192 00:22:09.177 [2024-11-28 12:45:51.537118] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.537132] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.537137] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.537142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.177 [2024-11-28 12:45:51.537147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.177 [2024-11-28 12:45:51.537150] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.537153] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcb6690): datao=0, datal=512, cccid=4 00:22:09.177 [2024-11-28 12:45:51.537157] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd18700) on tqpair(0xcb6690): expected_datao=0, payload_size=512 00:22:09.177 [2024-11-28 12:45:51.537161] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.537166] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.537169] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.537174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.177 [2024-11-28 12:45:51.537179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.177 [2024-11-28 12:45:51.537182] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.537185] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcb6690): datao=0, datal=512, cccid=6 00:22:09.177 [2024-11-28 12:45:51.537189] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd18a00) on tqpair(0xcb6690): expected_datao=0, 
payload_size=512 00:22:09.177 [2024-11-28 12:45:51.537192] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.537198] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.537201] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.537206] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.177 [2024-11-28 12:45:51.537210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.177 [2024-11-28 12:45:51.537213] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.537216] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcb6690): datao=0, datal=4096, cccid=7 00:22:09.177 [2024-11-28 12:45:51.537220] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd18b80) on tqpair(0xcb6690): expected_datao=0, payload_size=4096 00:22:09.177 [2024-11-28 12:45:51.537224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.537229] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.537233] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.537239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.177 [2024-11-28 12:45:51.537244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.177 [2024-11-28 12:45:51.537247] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.537251] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18880) on tqpair=0xcb6690 00:22:09.177 [2024-11-28 12:45:51.537260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.177 [2024-11-28 12:45:51.537266] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.177 [2024-11-28 
12:45:51.537269] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.537272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18700) on tqpair=0xcb6690 00:22:09.177 [2024-11-28 12:45:51.537280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.177 [2024-11-28 12:45:51.537285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.177 [2024-11-28 12:45:51.537288] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.537292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18a00) on tqpair=0xcb6690 00:22:09.177 [2024-11-28 12:45:51.537297] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.177 [2024-11-28 12:45:51.537302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.177 [2024-11-28 12:45:51.537307] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.177 [2024-11-28 12:45:51.537310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18b80) on tqpair=0xcb6690 00:22:09.177 ===================================================== 00:22:09.177 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:09.177 ===================================================== 00:22:09.177 Controller Capabilities/Features 00:22:09.177 ================================ 00:22:09.177 Vendor ID: 8086 00:22:09.177 Subsystem Vendor ID: 8086 00:22:09.177 Serial Number: SPDK00000000000001 00:22:09.177 Model Number: SPDK bdev Controller 00:22:09.178 Firmware Version: 25.01 00:22:09.178 Recommended Arb Burst: 6 00:22:09.178 IEEE OUI Identifier: e4 d2 5c 00:22:09.178 Multi-path I/O 00:22:09.178 May have multiple subsystem ports: Yes 00:22:09.178 May have multiple controllers: Yes 00:22:09.178 Associated with SR-IOV VF: No 00:22:09.178 Max Data Transfer Size: 131072 00:22:09.178 Max Number of Namespaces: 32 00:22:09.178 
Max Number of I/O Queues: 127 00:22:09.178 NVMe Specification Version (VS): 1.3 00:22:09.178 NVMe Specification Version (Identify): 1.3 00:22:09.178 Maximum Queue Entries: 128 00:22:09.178 Contiguous Queues Required: Yes 00:22:09.178 Arbitration Mechanisms Supported 00:22:09.178 Weighted Round Robin: Not Supported 00:22:09.178 Vendor Specific: Not Supported 00:22:09.178 Reset Timeout: 15000 ms 00:22:09.178 Doorbell Stride: 4 bytes 00:22:09.178 NVM Subsystem Reset: Not Supported 00:22:09.178 Command Sets Supported 00:22:09.178 NVM Command Set: Supported 00:22:09.178 Boot Partition: Not Supported 00:22:09.178 Memory Page Size Minimum: 4096 bytes 00:22:09.178 Memory Page Size Maximum: 4096 bytes 00:22:09.178 Persistent Memory Region: Not Supported 00:22:09.178 Optional Asynchronous Events Supported 00:22:09.178 Namespace Attribute Notices: Supported 00:22:09.178 Firmware Activation Notices: Not Supported 00:22:09.178 ANA Change Notices: Not Supported 00:22:09.178 PLE Aggregate Log Change Notices: Not Supported 00:22:09.178 LBA Status Info Alert Notices: Not Supported 00:22:09.178 EGE Aggregate Log Change Notices: Not Supported 00:22:09.178 Normal NVM Subsystem Shutdown event: Not Supported 00:22:09.178 Zone Descriptor Change Notices: Not Supported 00:22:09.178 Discovery Log Change Notices: Not Supported 00:22:09.178 Controller Attributes 00:22:09.178 128-bit Host Identifier: Supported 00:22:09.178 Non-Operational Permissive Mode: Not Supported 00:22:09.178 NVM Sets: Not Supported 00:22:09.178 Read Recovery Levels: Not Supported 00:22:09.178 Endurance Groups: Not Supported 00:22:09.178 Predictable Latency Mode: Not Supported 00:22:09.178 Traffic Based Keep ALive: Not Supported 00:22:09.178 Namespace Granularity: Not Supported 00:22:09.178 SQ Associations: Not Supported 00:22:09.178 UUID List: Not Supported 00:22:09.178 Multi-Domain Subsystem: Not Supported 00:22:09.178 Fixed Capacity Management: Not Supported 00:22:09.178 Variable Capacity Management: Not Supported 
00:22:09.178 Delete Endurance Group: Not Supported 00:22:09.178 Delete NVM Set: Not Supported 00:22:09.178 Extended LBA Formats Supported: Not Supported 00:22:09.178 Flexible Data Placement Supported: Not Supported 00:22:09.178 00:22:09.178 Controller Memory Buffer Support 00:22:09.178 ================================ 00:22:09.178 Supported: No 00:22:09.178 00:22:09.178 Persistent Memory Region Support 00:22:09.178 ================================ 00:22:09.178 Supported: No 00:22:09.178 00:22:09.178 Admin Command Set Attributes 00:22:09.178 ============================ 00:22:09.178 Security Send/Receive: Not Supported 00:22:09.178 Format NVM: Not Supported 00:22:09.178 Firmware Activate/Download: Not Supported 00:22:09.178 Namespace Management: Not Supported 00:22:09.178 Device Self-Test: Not Supported 00:22:09.178 Directives: Not Supported 00:22:09.178 NVMe-MI: Not Supported 00:22:09.178 Virtualization Management: Not Supported 00:22:09.178 Doorbell Buffer Config: Not Supported 00:22:09.178 Get LBA Status Capability: Not Supported 00:22:09.178 Command & Feature Lockdown Capability: Not Supported 00:22:09.178 Abort Command Limit: 4 00:22:09.178 Async Event Request Limit: 4 00:22:09.178 Number of Firmware Slots: N/A 00:22:09.178 Firmware Slot 1 Read-Only: N/A 00:22:09.178 Firmware Activation Without Reset: N/A 00:22:09.178 Multiple Update Detection Support: N/A 00:22:09.178 Firmware Update Granularity: No Information Provided 00:22:09.178 Per-Namespace SMART Log: No 00:22:09.178 Asymmetric Namespace Access Log Page: Not Supported 00:22:09.178 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:09.178 Command Effects Log Page: Supported 00:22:09.178 Get Log Page Extended Data: Supported 00:22:09.178 Telemetry Log Pages: Not Supported 00:22:09.178 Persistent Event Log Pages: Not Supported 00:22:09.178 Supported Log Pages Log Page: May Support 00:22:09.178 Commands Supported & Effects Log Page: Not Supported 00:22:09.178 Feature Identifiers & Effects Log Page:May Support 
00:22:09.178 NVMe-MI Commands & Effects Log Page: May Support 00:22:09.178 Data Area 4 for Telemetry Log: Not Supported 00:22:09.178 Error Log Page Entries Supported: 128 00:22:09.178 Keep Alive: Supported 00:22:09.178 Keep Alive Granularity: 10000 ms 00:22:09.178 00:22:09.178 NVM Command Set Attributes 00:22:09.178 ========================== 00:22:09.178 Submission Queue Entry Size 00:22:09.178 Max: 64 00:22:09.178 Min: 64 00:22:09.178 Completion Queue Entry Size 00:22:09.178 Max: 16 00:22:09.178 Min: 16 00:22:09.178 Number of Namespaces: 32 00:22:09.178 Compare Command: Supported 00:22:09.178 Write Uncorrectable Command: Not Supported 00:22:09.178 Dataset Management Command: Supported 00:22:09.178 Write Zeroes Command: Supported 00:22:09.178 Set Features Save Field: Not Supported 00:22:09.178 Reservations: Supported 00:22:09.178 Timestamp: Not Supported 00:22:09.178 Copy: Supported 00:22:09.178 Volatile Write Cache: Present 00:22:09.178 Atomic Write Unit (Normal): 1 00:22:09.178 Atomic Write Unit (PFail): 1 00:22:09.178 Atomic Compare & Write Unit: 1 00:22:09.178 Fused Compare & Write: Supported 00:22:09.178 Scatter-Gather List 00:22:09.178 SGL Command Set: Supported 00:22:09.178 SGL Keyed: Supported 00:22:09.178 SGL Bit Bucket Descriptor: Not Supported 00:22:09.178 SGL Metadata Pointer: Not Supported 00:22:09.178 Oversized SGL: Not Supported 00:22:09.178 SGL Metadata Address: Not Supported 00:22:09.178 SGL Offset: Supported 00:22:09.178 Transport SGL Data Block: Not Supported 00:22:09.178 Replay Protected Memory Block: Not Supported 00:22:09.178 00:22:09.178 Firmware Slot Information 00:22:09.178 ========================= 00:22:09.178 Active slot: 1 00:22:09.178 Slot 1 Firmware Revision: 25.01 00:22:09.178 00:22:09.178 00:22:09.178 Commands Supported and Effects 00:22:09.178 ============================== 00:22:09.178 Admin Commands 00:22:09.178 -------------- 00:22:09.178 Get Log Page (02h): Supported 00:22:09.178 Identify (06h): Supported 00:22:09.178 Abort 
(08h): Supported 00:22:09.178 Set Features (09h): Supported 00:22:09.178 Get Features (0Ah): Supported 00:22:09.179 Asynchronous Event Request (0Ch): Supported 00:22:09.179 Keep Alive (18h): Supported 00:22:09.179 I/O Commands 00:22:09.179 ------------ 00:22:09.179 Flush (00h): Supported LBA-Change 00:22:09.179 Write (01h): Supported LBA-Change 00:22:09.179 Read (02h): Supported 00:22:09.179 Compare (05h): Supported 00:22:09.179 Write Zeroes (08h): Supported LBA-Change 00:22:09.179 Dataset Management (09h): Supported LBA-Change 00:22:09.179 Copy (19h): Supported LBA-Change 00:22:09.179 00:22:09.179 Error Log 00:22:09.179 ========= 00:22:09.179 00:22:09.179 Arbitration 00:22:09.179 =========== 00:22:09.179 Arbitration Burst: 1 00:22:09.179 00:22:09.179 Power Management 00:22:09.179 ================ 00:22:09.179 Number of Power States: 1 00:22:09.179 Current Power State: Power State #0 00:22:09.179 Power State #0: 00:22:09.179 Max Power: 0.00 W 00:22:09.179 Non-Operational State: Operational 00:22:09.179 Entry Latency: Not Reported 00:22:09.179 Exit Latency: Not Reported 00:22:09.179 Relative Read Throughput: 0 00:22:09.179 Relative Read Latency: 0 00:22:09.179 Relative Write Throughput: 0 00:22:09.179 Relative Write Latency: 0 00:22:09.179 Idle Power: Not Reported 00:22:09.179 Active Power: Not Reported 00:22:09.179 Non-Operational Permissive Mode: Not Supported 00:22:09.179 00:22:09.179 Health Information 00:22:09.179 ================== 00:22:09.179 Critical Warnings: 00:22:09.179 Available Spare Space: OK 00:22:09.179 Temperature: OK 00:22:09.179 Device Reliability: OK 00:22:09.179 Read Only: No 00:22:09.179 Volatile Memory Backup: OK 00:22:09.179 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:09.179 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:09.179 Available Spare: 0% 00:22:09.179 Available Spare Threshold: 0% 00:22:09.179 Life Percentage Used:[2024-11-28 12:45:51.537392] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.179 
[2024-11-28 12:45:51.537396] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xcb6690) 00:22:09.179 [2024-11-28 12:45:51.537402] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-28 12:45:51.537414] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18b80, cid 7, qid 0 00:22:09.179 [2024-11-28 12:45:51.537488] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.179 [2024-11-28 12:45:51.537494] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.179 [2024-11-28 12:45:51.537497] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.179 [2024-11-28 12:45:51.537500] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18b80) on tqpair=0xcb6690 00:22:09.179 [2024-11-28 12:45:51.537530] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:09.179 [2024-11-28 12:45:51.537539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18100) on tqpair=0xcb6690 00:22:09.179 [2024-11-28 12:45:51.537545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.179 [2024-11-28 12:45:51.537549] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18280) on tqpair=0xcb6690 00:22:09.179 [2024-11-28 12:45:51.537554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.179 [2024-11-28 12:45:51.537558] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18400) on tqpair=0xcb6690 00:22:09.179 [2024-11-28 12:45:51.537562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.179 
[2024-11-28 12:45:51.537566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18580) on tqpair=0xcb6690 00:22:09.179 [2024-11-28 12:45:51.537570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.179 [2024-11-28 12:45:51.537577] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.179 [2024-11-28 12:45:51.537581] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.179 [2024-11-28 12:45:51.537584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcb6690) 00:22:09.179 [2024-11-28 12:45:51.537590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-28 12:45:51.537600] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18580, cid 3, qid 0 00:22:09.179 [2024-11-28 12:45:51.537670] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.179 [2024-11-28 12:45:51.537676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.179 [2024-11-28 12:45:51.537679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.179 [2024-11-28 12:45:51.537682] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18580) on tqpair=0xcb6690 00:22:09.179 [2024-11-28 12:45:51.537688] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.179 [2024-11-28 12:45:51.537691] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.179 [2024-11-28 12:45:51.537694] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcb6690) 00:22:09.179 [2024-11-28 12:45:51.537700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-28 12:45:51.537712] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18580, cid 3, qid 0 00:22:09.179 [2024-11-28 12:45:51.537785] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.179 [2024-11-28 12:45:51.537791] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.179 [2024-11-28 12:45:51.537796] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.179 [2024-11-28 12:45:51.537799] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18580) on tqpair=0xcb6690 00:22:09.179 [2024-11-28 12:45:51.537803] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:09.179 [2024-11-28 12:45:51.537807] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:09.179 [2024-11-28 12:45:51.537815] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.179 [2024-11-28 12:45:51.537819] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.179 [2024-11-28 12:45:51.537822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcb6690) 00:22:09.179 [2024-11-28 12:45:51.537828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-28 12:45:51.537837] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18580, cid 3, qid 0 00:22:09.179 [2024-11-28 12:45:51.537897] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.179 [2024-11-28 12:45:51.537903] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.179 [2024-11-28 12:45:51.537906] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.179 [2024-11-28 12:45:51.537910] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18580) on tqpair=0xcb6690 00:22:09.179 [2024-11-28 12:45:51.537918] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.179 [2024-11-28 12:45:51.537922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.179 [2024-11-28 12:45:51.537925] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcb6690) 00:22:09.179 [2024-11-28 12:45:51.537931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-28 12:45:51.537940] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18580, cid 3, qid 0 00:22:09.179 [2024-11-28 12:45:51.541956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.179 [2024-11-28 12:45:51.541965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.179 [2024-11-28 12:45:51.541968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.179 [2024-11-28 12:45:51.541972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18580) on tqpair=0xcb6690 00:22:09.179 [2024-11-28 12:45:51.541982] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.180 [2024-11-28 12:45:51.541986] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.180 [2024-11-28 12:45:51.541989] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcb6690) 00:22:09.180 [2024-11-28 12:45:51.541995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.180 [2024-11-28 12:45:51.542006] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd18580, cid 3, qid 0 00:22:09.180 [2024-11-28 12:45:51.542159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.180 [2024-11-28 12:45:51.542164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.180 [2024-11-28 12:45:51.542167] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.180 [2024-11-28 12:45:51.542171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd18580) on tqpair=0xcb6690 00:22:09.180 [2024-11-28 12:45:51.542178] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:22:09.180 0% 00:22:09.180 Data Units Read: 0 00:22:09.180 Data Units Written: 0 00:22:09.180 Host Read Commands: 0 00:22:09.180 Host Write Commands: 0 00:22:09.180 Controller Busy Time: 0 minutes 00:22:09.180 Power Cycles: 0 00:22:09.180 Power On Hours: 0 hours 00:22:09.180 Unsafe Shutdowns: 0 00:22:09.180 Unrecoverable Media Errors: 0 00:22:09.180 Lifetime Error Log Entries: 0 00:22:09.180 Warning Temperature Time: 0 minutes 00:22:09.180 Critical Temperature Time: 0 minutes 00:22:09.180 00:22:09.180 Number of Queues 00:22:09.180 ================ 00:22:09.180 Number of I/O Submission Queues: 127 00:22:09.180 Number of I/O Completion Queues: 127 00:22:09.180 00:22:09.180 Active Namespaces 00:22:09.180 ================= 00:22:09.180 Namespace ID:1 00:22:09.180 Error Recovery Timeout: Unlimited 00:22:09.180 Command Set Identifier: NVM (00h) 00:22:09.180 Deallocate: Supported 00:22:09.180 Deallocated/Unwritten Error: Not Supported 00:22:09.180 Deallocated Read Value: Unknown 00:22:09.180 Deallocate in Write Zeroes: Not Supported 00:22:09.180 Deallocated Guard Field: 0xFFFF 00:22:09.180 Flush: Supported 00:22:09.180 Reservation: Supported 00:22:09.180 Namespace Sharing Capabilities: Multiple Controllers 00:22:09.180 Size (in LBAs): 131072 (0GiB) 00:22:09.180 Capacity (in LBAs): 131072 (0GiB) 00:22:09.180 Utilization (in LBAs): 131072 (0GiB) 00:22:09.180 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:09.180 EUI64: ABCDEF0123456789 00:22:09.180 UUID: c5ebddd4-1679-411d-8514-34d6718d3615 00:22:09.180 Thin Provisioning: Not Supported 00:22:09.180 Per-NS Atomic Units: Yes 00:22:09.180 Atomic Boundary Size 
(Normal): 0 00:22:09.180 Atomic Boundary Size (PFail): 0 00:22:09.180 Atomic Boundary Offset: 0 00:22:09.180 Maximum Single Source Range Length: 65535 00:22:09.180 Maximum Copy Length: 65535 00:22:09.180 Maximum Source Range Count: 1 00:22:09.180 NGUID/EUI64 Never Reused: No 00:22:09.180 Namespace Write Protected: No 00:22:09.180 Number of LBA Formats: 1 00:22:09.180 Current LBA Format: LBA Format #00 00:22:09.180 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:09.180 00:22:09.180 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:09.180 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:09.180 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.180 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.180 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.180 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:09.180 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:09.180 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:09.180 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:09.180 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:09.180 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:09.180 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:09.180 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:09.180 rmmod nvme_tcp 00:22:09.180 rmmod nvme_fabrics 00:22:09.180 rmmod nvme_keyring 00:22:09.180 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:22:09.180 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:09.180 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:09.180 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2604566 ']' 00:22:09.180 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2604566 00:22:09.180 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2604566 ']' 00:22:09.180 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2604566 00:22:09.180 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:09.180 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:09.180 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2604566 00:22:09.440 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:09.440 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:09.440 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2604566' 00:22:09.440 killing process with pid 2604566 00:22:09.440 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2604566 00:22:09.440 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2604566 00:22:09.440 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:09.440 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:09.440 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:09.440 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:09.440 12:45:51 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:09.440 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:09.440 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:09.440 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:09.440 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:09.440 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.440 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.440 12:45:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.980 12:45:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:11.980 00:22:11.980 real 0m8.550s 00:22:11.980 user 0m4.953s 00:22:11.980 sys 0m4.319s 00:22:11.980 12:45:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:11.980 12:45:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:11.980 ************************************ 00:22:11.980 END TEST nvmf_identify 00:22:11.980 ************************************ 00:22:11.980 12:45:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:11.980 12:45:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:11.980 12:45:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:11.980 12:45:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.980 ************************************ 00:22:11.980 START TEST nvmf_perf 00:22:11.980 ************************************ 
00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:11.980 * Looking for test storage... 00:22:11.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:11.980 12:45:54 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:11.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.980 --rc genhtml_branch_coverage=1 00:22:11.980 --rc genhtml_function_coverage=1 00:22:11.980 --rc genhtml_legend=1 00:22:11.980 --rc geninfo_all_blocks=1 00:22:11.980 --rc geninfo_unexecuted_blocks=1 00:22:11.980 00:22:11.980 
' 00:22:11.980 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:11.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.980 --rc genhtml_branch_coverage=1 00:22:11.980 --rc genhtml_function_coverage=1 00:22:11.980 --rc genhtml_legend=1 00:22:11.980 --rc geninfo_all_blocks=1 00:22:11.980 --rc geninfo_unexecuted_blocks=1 00:22:11.980 00:22:11.980 ' 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:11.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.981 --rc genhtml_branch_coverage=1 00:22:11.981 --rc genhtml_function_coverage=1 00:22:11.981 --rc genhtml_legend=1 00:22:11.981 --rc geninfo_all_blocks=1 00:22:11.981 --rc geninfo_unexecuted_blocks=1 00:22:11.981 00:22:11.981 ' 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:11.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.981 --rc genhtml_branch_coverage=1 00:22:11.981 --rc genhtml_function_coverage=1 00:22:11.981 --rc genhtml_legend=1 00:22:11.981 --rc geninfo_all_blocks=1 00:22:11.981 --rc geninfo_unexecuted_blocks=1 00:22:11.981 00:22:11.981 ' 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.981 12:45:54 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:11.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:11.981 12:45:54 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:11.981 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:17.253 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.253 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:17.253 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:17.253 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:17.253 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:17.253 12:45:58 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:17.253 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:17.253 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:17.253 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:17.253 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:17.253 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:17.253 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:17.253 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:17.253 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:17.253 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:17.253 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.253 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.253 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.253 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.254 
12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:17.254 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:17.254 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:17.254 Found net devices under 0000:86:00.0: cvl_0_0 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:17.254 12:45:58 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:17.254 Found net devices under 0000:86:00.1: cvl_0_1 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.254 12:45:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:17.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:22:17.254 00:22:17.254 --- 10.0.0.2 ping statistics --- 00:22:17.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.254 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:17.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:22:17.254 00:22:17.254 --- 10.0.0.1 ping statistics --- 00:22:17.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.254 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2608064 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2608064 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2608064 ']' 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:17.254 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:17.254 [2024-11-28 12:45:59.104275] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:22:17.254 [2024-11-28 12:45:59.104322] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.254 [2024-11-28 12:45:59.169898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:17.255 [2024-11-28 12:45:59.213143] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.255 [2024-11-28 12:45:59.213181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.255 [2024-11-28 12:45:59.213188] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.255 [2024-11-28 12:45:59.213195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.255 [2024-11-28 12:45:59.213200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:17.255 [2024-11-28 12:45:59.214630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.255 [2024-11-28 12:45:59.214729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.255 [2024-11-28 12:45:59.214805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:17.255 [2024-11-28 12:45:59.214807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.255 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:17.255 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:17.255 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:17.255 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:17.255 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:17.255 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.255 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:17.255 12:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:20.541 12:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:20.541 12:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:20.541 12:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:20.541 12:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:20.541 12:46:02 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:20.541 12:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:20.541 12:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:20.541 12:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:20.541 12:46:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:20.541 [2024-11-28 12:46:02.993314] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.541 12:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:20.800 12:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:20.800 12:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:21.058 12:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:21.058 12:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:21.317 12:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:21.317 [2024-11-28 12:46:03.816337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.576 12:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:22:21.576 12:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:21.576 12:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:21.576 12:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:21.576 12:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:22.952 Initializing NVMe Controllers 00:22:22.952 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:22.952 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:22.953 Initialization complete. Launching workers. 00:22:22.953 ======================================================== 00:22:22.953 Latency(us) 00:22:22.953 Device Information : IOPS MiB/s Average min max 00:22:22.953 PCIE (0000:5e:00.0) NSID 1 from core 0: 97363.40 380.33 328.25 29.46 5212.21 00:22:22.953 ======================================================== 00:22:22.953 Total : 97363.40 380.33 328.25 29.46 5212.21 00:22:22.953 00:22:22.953 12:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:24.330 Initializing NVMe Controllers 00:22:24.330 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:24.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:24.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:24.330 Initialization complete. Launching workers. 
00:22:24.330 ======================================================== 00:22:24.330 Latency(us) 00:22:24.330 Device Information : IOPS MiB/s Average min max 00:22:24.330 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 90.00 0.35 11365.42 118.34 44748.03 00:22:24.330 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 68.00 0.27 15010.05 6981.76 47893.29 00:22:24.330 ======================================================== 00:22:24.330 Total : 158.00 0.62 12933.99 118.34 47893.29 00:22:24.330 00:22:24.330 12:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:25.709 Initializing NVMe Controllers 00:22:25.709 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:25.709 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:25.709 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:25.709 Initialization complete. Launching workers. 
00:22:25.709 ======================================================== 00:22:25.709 Latency(us) 00:22:25.709 Device Information : IOPS MiB/s Average min max 00:22:25.709 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10793.99 42.16 2969.14 484.07 6285.32 00:22:25.709 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3934.00 15.37 8168.58 6261.95 15954.85 00:22:25.709 ======================================================== 00:22:25.709 Total : 14727.99 57.53 4357.96 484.07 15954.85 00:22:25.709 00:22:25.709 12:46:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:25.709 12:46:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:25.709 12:46:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:28.246 Initializing NVMe Controllers 00:22:28.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:28.246 Controller IO queue size 128, less than required. 00:22:28.246 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:28.246 Controller IO queue size 128, less than required. 00:22:28.246 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:28.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:28.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:28.246 Initialization complete. Launching workers. 
00:22:28.246 ======================================================== 00:22:28.246 Latency(us) 00:22:28.246 Device Information : IOPS MiB/s Average min max 00:22:28.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1564.36 391.09 83086.22 49603.89 150913.83 00:22:28.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 600.99 150.25 224144.95 83015.65 298579.65 00:22:28.246 ======================================================== 00:22:28.246 Total : 2165.35 541.34 122236.68 49603.89 298579.65 00:22:28.246 00:22:28.246 12:46:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:28.246 No valid NVMe controllers or AIO or URING devices found 00:22:28.246 Initializing NVMe Controllers 00:22:28.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:28.246 Controller IO queue size 128, less than required. 00:22:28.246 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:28.246 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:28.246 Controller IO queue size 128, less than required. 00:22:28.246 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:28.246 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:28.246 WARNING: Some requested NVMe devices were skipped 00:22:28.246 12:46:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:30.782 Initializing NVMe Controllers 00:22:30.782 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:30.782 Controller IO queue size 128, less than required. 00:22:30.783 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:30.783 Controller IO queue size 128, less than required. 00:22:30.783 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:30.783 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:30.783 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:30.783 Initialization complete. Launching workers. 
00:22:30.783 00:22:30.783 ==================== 00:22:30.783 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:30.783 TCP transport: 00:22:30.783 polls: 15412 00:22:30.783 idle_polls: 11675 00:22:30.783 sock_completions: 3737 00:22:30.783 nvme_completions: 6245 00:22:30.783 submitted_requests: 9378 00:22:30.783 queued_requests: 1 00:22:30.783 00:22:30.783 ==================== 00:22:30.783 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:30.783 TCP transport: 00:22:30.783 polls: 15796 00:22:30.783 idle_polls: 11873 00:22:30.783 sock_completions: 3923 00:22:30.783 nvme_completions: 6255 00:22:30.783 submitted_requests: 9376 00:22:30.783 queued_requests: 1 00:22:30.783 ======================================================== 00:22:30.783 Latency(us) 00:22:30.783 Device Information : IOPS MiB/s Average min max 00:22:30.783 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1559.16 389.79 83633.81 44304.57 132867.34 00:22:30.783 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1561.65 390.41 82528.07 42341.66 129158.19 00:22:30.783 ======================================================== 00:22:30.783 Total : 3120.81 780.20 83080.50 42341.66 132867.34 00:22:30.783 00:22:30.783 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:30.783 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:30.783 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:30.783 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:30.783 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:30.783 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:30.783 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:22:30.783 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:30.783 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:30.783 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:30.783 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:30.783 rmmod nvme_tcp 00:22:30.783 rmmod nvme_fabrics 00:22:30.783 rmmod nvme_keyring 00:22:31.042 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:31.042 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:31.042 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:31.042 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2608064 ']' 00:22:31.042 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2608064 00:22:31.042 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2608064 ']' 00:22:31.042 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2608064 00:22:31.042 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:22:31.042 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.042 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2608064 00:22:31.042 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:31.042 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:31.042 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2608064' 00:22:31.042 killing process with pid 2608064 00:22:31.042 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 2608064 00:22:31.042 12:46:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2608064 00:22:32.420 12:46:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:32.420 12:46:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:32.420 12:46:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:32.420 12:46:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:32.420 12:46:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:32.420 12:46:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:32.420 12:46:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:32.420 12:46:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:32.420 12:46:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:32.420 12:46:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.420 12:46:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.420 12:46:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.958 12:46:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:34.958 00:22:34.958 real 0m22.905s 00:22:34.958 user 1m1.719s 00:22:34.958 sys 0m7.327s 00:22:34.958 12:46:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:34.958 12:46:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:34.958 ************************************ 00:22:34.958 END TEST nvmf_perf 00:22:34.958 ************************************ 00:22:34.958 12:46:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:34.958 12:46:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:34.958 12:46:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:34.958 12:46:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.958 ************************************ 00:22:34.958 START TEST nvmf_fio_host 00:22:34.958 ************************************ 00:22:34.958 12:46:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:34.958 * Looking for test storage... 00:22:34.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:34.958 12:46:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:34.958 12:46:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:34.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.958 --rc genhtml_branch_coverage=1 00:22:34.958 --rc genhtml_function_coverage=1 00:22:34.958 --rc genhtml_legend=1 00:22:34.958 --rc geninfo_all_blocks=1 00:22:34.958 --rc geninfo_unexecuted_blocks=1 00:22:34.958 00:22:34.958 ' 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:34.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.958 --rc genhtml_branch_coverage=1 00:22:34.958 --rc genhtml_function_coverage=1 00:22:34.958 --rc genhtml_legend=1 00:22:34.958 --rc geninfo_all_blocks=1 00:22:34.958 --rc geninfo_unexecuted_blocks=1 00:22:34.958 00:22:34.958 ' 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:34.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.958 --rc genhtml_branch_coverage=1 00:22:34.958 --rc genhtml_function_coverage=1 00:22:34.958 --rc genhtml_legend=1 00:22:34.958 --rc geninfo_all_blocks=1 00:22:34.958 --rc geninfo_unexecuted_blocks=1 00:22:34.958 00:22:34.958 ' 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:34.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.958 --rc genhtml_branch_coverage=1 00:22:34.958 --rc genhtml_function_coverage=1 00:22:34.958 --rc genhtml_legend=1 00:22:34.958 --rc geninfo_all_blocks=1 00:22:34.958 --rc geninfo_unexecuted_blocks=1 00:22:34.958 00:22:34.958 ' 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.958 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:34.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:34.959 12:46:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:34.959 12:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:22:40.232 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:40.232 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.232 12:46:22 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:40.232 Found net devices under 0000:86:00.0: cvl_0_0 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:40.232 Found net devices under 0000:86:00.1: cvl_0_1 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:40.232 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.233 12:46:22 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:40.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:22:40.233 00:22:40.233 --- 10.0.0.2 ping statistics --- 00:22:40.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.233 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:40.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:22:40.233 00:22:40.233 --- 10.0.0.1 ping statistics --- 00:22:40.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.233 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2614157 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2614157 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2614157 ']' 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.233 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.492 [2024-11-28 12:46:22.761227] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:22:40.492 [2024-11-28 12:46:22.761279] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.492 [2024-11-28 12:46:22.827603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:40.492 [2024-11-28 12:46:22.871802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.492 [2024-11-28 12:46:22.871841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:40.492 [2024-11-28 12:46:22.871851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.492 [2024-11-28 12:46:22.871857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.492 [2024-11-28 12:46:22.871862] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.492 [2024-11-28 12:46:22.873382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.492 [2024-11-28 12:46:22.873482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.492 [2024-11-28 12:46:22.873588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.492 [2024-11-28 12:46:22.873590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.492 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.492 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:22:40.492 12:46:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:40.751 [2024-11-28 12:46:23.149121] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.751 12:46:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:40.751 12:46:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.751 12:46:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.751 12:46:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:41.011 Malloc1 00:22:41.011 12:46:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:41.270 12:46:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:41.529 12:46:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:41.529 [2024-11-28 12:46:24.011457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.529 12:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:41.788 12:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:41.788 12:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:41.788 12:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:41.788 12:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:41.788 12:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:41.788 12:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:41.788 12:46:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:41.788 12:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:41.788 12:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:41.788 12:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:41.788 12:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:41.788 12:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:41.788 12:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:41.788 12:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:41.788 12:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:41.788 12:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:41.788 12:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:41.788 12:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:41.788 12:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:41.788 12:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:41.788 12:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:41.788 12:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:41.788 12:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:42.046 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:42.046 fio-3.35 00:22:42.046 Starting 1 thread 00:22:44.581 00:22:44.581 test: (groupid=0, jobs=1): err= 0: pid=2614537: Thu Nov 28 12:46:26 2024 00:22:44.581 read: IOPS=11.4k, BW=44.4MiB/s (46.6MB/s)(90.9MiB/2047msec) 00:22:44.581 slat (nsec): min=1583, max=243260, avg=1728.69, stdev=2241.31 00:22:44.581 clat (usec): min=3109, max=52434, avg=6239.90, stdev=2563.54 00:22:44.581 lat (usec): min=3142, max=52435, avg=6241.63, stdev=2563.52 00:22:44.581 clat percentiles (usec): 00:22:44.581 | 1.00th=[ 4948], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5735], 00:22:44.581 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6194], 00:22:44.581 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6849], 00:22:44.581 | 99.00th=[ 7177], 99.50th=[ 7504], 99.90th=[50594], 99.95th=[51119], 00:22:44.581 | 99.99th=[52167] 00:22:44.581 bw ( KiB/s): min=45776, max=46960, per=100.00%, avg=46406.00, stdev=549.67, samples=4 00:22:44.581 iops : min=11444, max=11740, avg=11601.50, stdev=137.42, samples=4 00:22:44.581 write: IOPS=11.3k, BW=44.1MiB/s (46.2MB/s)(90.3MiB/2047msec); 0 zone resets 00:22:44.581 slat (nsec): min=1609, max=229526, avg=1792.84, stdev=1684.84 00:22:44.581 clat (usec): min=2433, max=50990, avg=5022.76, stdev=2027.68 00:22:44.581 lat (usec): min=2448, max=50991, avg=5024.55, stdev=2027.68 00:22:44.581 clat percentiles (usec): 00:22:44.581 | 1.00th=[ 4047], 5.00th=[ 4359], 10.00th=[ 4490], 20.00th=[ 4621], 00:22:44.581 | 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 4948], 60.00th=[ 5014], 
00:22:44.581 | 70.00th=[ 5145], 80.00th=[ 5211], 90.00th=[ 5407], 95.00th=[ 5538], 00:22:44.581 | 99.00th=[ 5800], 99.50th=[ 6063], 99.90th=[48497], 99.95th=[50070], 00:22:44.581 | 99.99th=[50594] 00:22:44.581 bw ( KiB/s): min=45816, max=46272, per=100.00%, avg=46128.00, stdev=209.94, samples=4 00:22:44.581 iops : min=11454, max=11568, avg=11532.00, stdev=52.48, samples=4 00:22:44.581 lat (msec) : 4=0.42%, 10=99.31%, 50=0.20%, 100=0.08% 00:22:44.581 cpu : usr=73.07%, sys=25.51%, ctx=115, majf=0, minf=2 00:22:44.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:44.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:44.581 issued rwts: total=23282,23112,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.581 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:44.581 00:22:44.581 Run status group 0 (all jobs): 00:22:44.581 READ: bw=44.4MiB/s (46.6MB/s), 44.4MiB/s-44.4MiB/s (46.6MB/s-46.6MB/s), io=90.9MiB (95.4MB), run=2047-2047msec 00:22:44.581 WRITE: bw=44.1MiB/s (46.2MB/s), 44.1MiB/s-44.1MiB/s (46.2MB/s-46.2MB/s), io=90.3MiB (94.7MB), run=2047-2047msec 00:22:44.581 12:46:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:44.581 12:46:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:44.581 12:46:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:44.581 12:46:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:44.581 12:46:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:44.581 12:46:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:44.581 12:46:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:44.581 12:46:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:44.581 12:46:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:44.581 12:46:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:44.581 12:46:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:44.581 12:46:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:44.581 12:46:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:44.581 12:46:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:44.581 12:46:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:44.581 12:46:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:44.581 12:46:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:44.581 12:46:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:44.581 12:46:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:44.581 12:46:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:44.581 12:46:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:44.581 12:46:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:44.840 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:44.840 fio-3.35 00:22:44.840 Starting 1 thread 00:22:47.376 00:22:47.376 test: (groupid=0, jobs=1): err= 0: pid=2615111: Thu Nov 28 12:46:29 2024 00:22:47.376 read: IOPS=10.8k, BW=169MiB/s (177MB/s)(338MiB/2006msec) 00:22:47.376 slat (nsec): min=2552, max=84908, avg=2835.99, stdev=1310.64 00:22:47.376 clat (usec): min=1588, max=14571, avg=6888.78, stdev=1645.35 00:22:47.376 lat (usec): min=1590, max=14586, avg=6891.61, stdev=1645.52 00:22:47.376 clat percentiles (usec): 00:22:47.376 | 1.00th=[ 3687], 5.00th=[ 4424], 10.00th=[ 4883], 20.00th=[ 5473], 00:22:47.376 | 30.00th=[ 5932], 40.00th=[ 6390], 50.00th=[ 6783], 60.00th=[ 7308], 00:22:47.376 | 70.00th=[ 7701], 80.00th=[ 8094], 90.00th=[ 8979], 95.00th=[ 9765], 00:22:47.376 | 99.00th=[11469], 99.50th=[11994], 99.90th=[13173], 99.95th=[14091], 00:22:47.376 | 99.99th=[14484] 00:22:47.376 bw ( KiB/s): min=79040, max=95872, per=50.58%, avg=87272.00, stdev=7894.23, samples=4 00:22:47.376 iops : min= 4940, max= 5992, avg=5454.50, stdev=493.39, samples=4 00:22:47.376 write: IOPS=6342, BW=99.1MiB/s (104MB/s)(179MiB/1802msec); 0 zone resets 00:22:47.376 slat (usec): min=30, max=386, avg=31.95, stdev= 7.67 00:22:47.377 clat (usec): min=3313, max=15858, avg=8813.00, stdev=1509.20 00:22:47.377 lat (usec): min=3344, max=15969, avg=8844.95, stdev=1511.25 00:22:47.377 clat percentiles (usec): 00:22:47.377 | 
1.00th=[ 5800], 5.00th=[ 6652], 10.00th=[ 7046], 20.00th=[ 7570], 00:22:47.377 | 30.00th=[ 7963], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 8979], 00:22:47.377 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10814], 95.00th=[11469], 00:22:47.377 | 99.00th=[13042], 99.50th=[14091], 99.90th=[15533], 99.95th=[15664], 00:22:47.377 | 99.99th=[15795] 00:22:47.377 bw ( KiB/s): min=84256, max=99712, per=89.75%, avg=91080.00, stdev=7473.78, samples=4 00:22:47.377 iops : min= 5266, max= 6232, avg=5692.50, stdev=467.11, samples=4 00:22:47.377 lat (msec) : 2=0.05%, 4=1.53%, 10=89.27%, 20=9.16% 00:22:47.377 cpu : usr=86.08%, sys=13.12%, ctx=37, majf=0, minf=2 00:22:47.377 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:47.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:47.377 issued rwts: total=21634,11430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:47.377 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:47.377 00:22:47.377 Run status group 0 (all jobs): 00:22:47.377 READ: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=338MiB (354MB), run=2006-2006msec 00:22:47.377 WRITE: bw=99.1MiB/s (104MB/s), 99.1MiB/s-99.1MiB/s (104MB/s-104MB/s), io=179MiB (187MB), run=1802-1802msec 00:22:47.377 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:47.377 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:47.377 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:47.377 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:47.377 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:47.377 12:46:29 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:47.377 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:47.377 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:47.377 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:47.377 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:47.377 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:47.377 rmmod nvme_tcp 00:22:47.377 rmmod nvme_fabrics 00:22:47.377 rmmod nvme_keyring 00:22:47.377 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:47.377 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:47.377 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:47.377 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2614157 ']' 00:22:47.377 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2614157 00:22:47.377 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2614157 ']' 00:22:47.377 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2614157 00:22:47.377 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:47.377 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.377 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2614157 00:22:47.636 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:47.637 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:47.637 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2614157' 00:22:47.637 killing process with pid 2614157 00:22:47.637 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2614157 00:22:47.637 12:46:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2614157 00:22:47.637 12:46:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:47.637 12:46:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:47.637 12:46:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:47.637 12:46:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:47.637 12:46:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:47.637 12:46:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:47.637 12:46:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:47.637 12:46:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.637 12:46:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:47.637 12:46:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.637 12:46:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.637 12:46:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:50.175 00:22:50.175 real 0m15.180s 00:22:50.175 user 0m45.697s 00:22:50.175 sys 0m6.218s 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:50.175 ************************************ 00:22:50.175 END TEST nvmf_fio_host 00:22:50.175 ************************************ 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.175 ************************************ 00:22:50.175 START TEST nvmf_failover 00:22:50.175 ************************************ 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:50.175 * Looking for test storage... 
00:22:50.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:50.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.175 --rc genhtml_branch_coverage=1 00:22:50.175 --rc genhtml_function_coverage=1 00:22:50.175 --rc genhtml_legend=1 00:22:50.175 --rc geninfo_all_blocks=1 00:22:50.175 --rc geninfo_unexecuted_blocks=1 00:22:50.175 00:22:50.175 ' 00:22:50.175 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:22:50.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.175 --rc genhtml_branch_coverage=1 00:22:50.175 --rc genhtml_function_coverage=1 00:22:50.176 --rc genhtml_legend=1 00:22:50.176 --rc geninfo_all_blocks=1 00:22:50.176 --rc geninfo_unexecuted_blocks=1 00:22:50.176 00:22:50.176 ' 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:50.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.176 --rc genhtml_branch_coverage=1 00:22:50.176 --rc genhtml_function_coverage=1 00:22:50.176 --rc genhtml_legend=1 00:22:50.176 --rc geninfo_all_blocks=1 00:22:50.176 --rc geninfo_unexecuted_blocks=1 00:22:50.176 00:22:50.176 ' 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:50.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.176 --rc genhtml_branch_coverage=1 00:22:50.176 --rc genhtml_function_coverage=1 00:22:50.176 --rc genhtml_legend=1 00:22:50.176 --rc geninfo_all_blocks=1 00:22:50.176 --rc geninfo_unexecuted_blocks=1 00:22:50.176 00:22:50.176 ' 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:50.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:50.176 12:46:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.452 12:46:37 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:55.452 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.452 12:46:37 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:55.452 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:55.452 12:46:37 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:55.452 Found net devices under 0000:86:00.0: cvl_0_0 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:55.452 Found net devices under 0000:86:00.1: cvl_0_1 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:55.452 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:55.453 12:46:37 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:55.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:55.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:22:55.453 00:22:55.453 --- 10.0.0.2 ping statistics --- 00:22:55.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.453 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:55.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:55.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:22:55.453 00:22:55.453 --- 10.0.0.1 ping statistics --- 00:22:55.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.453 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2618980 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 2618980 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2618980 ']' 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.453 12:46:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:55.712 [2024-11-28 12:46:37.988305] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:22:55.712 [2024-11-28 12:46:37.988350] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.712 [2024-11-28 12:46:38.056290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:55.712 [2024-11-28 12:46:38.099529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.712 [2024-11-28 12:46:38.099565] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.712 [2024-11-28 12:46:38.099573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.713 [2024-11-28 12:46:38.099580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:55.713 [2024-11-28 12:46:38.099585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.713 [2024-11-28 12:46:38.100980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.713 [2024-11-28 12:46:38.100999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:55.713 [2024-11-28 12:46:38.101001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.713 12:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.713 12:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:55.713 12:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:55.713 12:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:55.713 12:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:55.971 12:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.971 12:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:55.971 [2024-11-28 12:46:38.407293] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.971 12:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:56.231 Malloc0 00:22:56.231 12:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:56.490 12:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:56.749 12:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:56.749 [2024-11-28 12:46:39.223931] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.749 12:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:57.008 [2024-11-28 12:46:39.436501] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:57.008 12:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:57.266 [2024-11-28 12:46:39.641137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:57.266 12:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:57.266 12:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2619339 00:22:57.266 12:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:57.266 12:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2619339 /var/tmp/bdevperf.sock 00:22:57.266 12:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 2619339 ']' 00:22:57.266 12:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.266 12:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:57.266 12:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:57.266 12:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:57.266 12:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:57.526 12:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:57.526 12:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:57.526 12:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:57.785 NVMe0n1 00:22:57.785 12:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:58.043 00:22:58.302 12:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2619370 00:22:58.302 12:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:58.303 12:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:22:59.240 12:46:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:59.499 [2024-11-28 12:46:41.763234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2f2d0 is same with the state(6) to be set 00:22:59.501 12:46:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:02.935 12:46:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:02.935 00:23:02.935 12:46:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:02.935 [2024-11-28 12:46:45.288152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.935 [2024-11-28 12:46:45.288229]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.935 [2024-11-28 12:46:45.288235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.935 [2024-11-28 12:46:45.288241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.935 [2024-11-28 12:46:45.288247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.935 [2024-11-28 12:46:45.288253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.935 [2024-11-28 12:46:45.288259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.935 [2024-11-28 12:46:45.288265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.935 [2024-11-28 12:46:45.288271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.935 [2024-11-28 12:46:45.288277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.935 [2024-11-28 12:46:45.288282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.935 [2024-11-28 12:46:45.288288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.935 [2024-11-28 12:46:45.288294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.935 [2024-11-28 12:46:45.288300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.935 [2024-11-28 12:46:45.288307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.935 [2024-11-28 12:46:45.288313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.935 [2024-11-28 12:46:45.288319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.935 [2024-11-28 12:46:45.288324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.935 [2024-11-28 12:46:45.288330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.935 [2024-11-28 12:46:45.288336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.935 [2024-11-28 12:46:45.288347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.935 [2024-11-28 12:46:45.288354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.935 [2024-11-28 12:46:45.288359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.936 [2024-11-28 12:46:45.288365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.936 [2024-11-28 12:46:45.288371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.936 [2024-11-28 12:46:45.288377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 
is same with the state(6) to be set 00:23:02.936 [2024-11-28 12:46:45.288383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.936 [2024-11-28 12:46:45.288389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.936 [2024-11-28 12:46:45.288395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.936 [2024-11-28 12:46:45.288401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.936 [2024-11-28 12:46:45.288407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.936 [2024-11-28 12:46:45.288413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.936 [2024-11-28 12:46:45.288419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.936 [2024-11-28 12:46:45.288425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.936 [2024-11-28 12:46:45.288431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.936 [2024-11-28 12:46:45.288437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.936 [2024-11-28 12:46:45.288443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.936 [2024-11-28 12:46:45.288448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 
00:23:02.936 [2024-11-28 12:46:45.288454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ffa0 is same with the state(6) to be set 00:23:02.936 12:46:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:06.222 12:46:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:06.222 [2024-11-28 12:46:48.508351] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.222 12:46:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:07.159 12:46:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:07.418 [2024-11-28 12:46:49.731667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd30ce0 is same with the state(6) to be set 00:23:07.418 12:46:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2619370 00:23:13.999 { 00:23:13.999 "results": [ 00:23:13.999 { 00:23:13.999 "job": "NVMe0n1", 00:23:13.999 "core_mask": "0x1", 00:23:13.999 "workload": "verify", 00:23:13.999 "status": "finished", 00:23:13.999 "verify_range": { 00:23:13.999 "start": 0, 00:23:13.999 "length": 16384 00:23:13.999 }, 00:23:13.999 "queue_depth": 128, 00:23:13.999 "io_size": 4096, 00:23:13.999 "runtime": 15.004968, 00:23:13.999 "iops": 10637.476867661431, 00:23:13.999 "mibps": 41.552644014302466, 00:23:13.999 "io_failed": 10029, 00:23:13.999 "io_timeout": 0, 00:23:13.999 "avg_latency_us": 11298.667151200518, 00:23:13.999 "min_latency_us": 418.504347826087, 00:23:13.999 "max_latency_us": 21427.42260869565 00:23:13.999 } 00:23:13.999 ], 00:23:13.999 "core_count": 1 00:23:13.999 } 00:23:13.999 12:46:55 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2619339 00:23:13.999 12:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2619339 ']' 00:23:13.999 12:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2619339 00:23:13.999 12:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:13.999 12:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.999 12:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2619339 00:23:13.999 12:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:13.999 12:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:13.999 12:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2619339' 00:23:13.999 killing process with pid 2619339 00:23:13.999 12:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2619339 00:23:13.999 12:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2619339 00:23:13.999 12:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:13.999 [2024-11-28 12:46:39.702559] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:23:13.999 [2024-11-28 12:46:39.702612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2619339 ] 00:23:13.999 [2024-11-28 12:46:39.765473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.999 [2024-11-28 12:46:39.808037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.999 Running I/O for 15 seconds... 00:23:13.999 10667.00 IOPS, 41.67 MiB/s [2024-11-28T11:46:56.518Z] [2024-11-28 12:46:41.764253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.999 [2024-11-28 12:46:41.764285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.999 [2024-11-28 12:46:41.764300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.999 [2024-11-28 12:46:41.764308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.999 [2024-11-28 12:46:41.764317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.999 [2024-11-28 12:46:41.764324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.999 [2024-11-28 12:46:41.764333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.999 [2024-11-28 12:46:41.764340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:13.999 [2024-11-28 12:46:41.764349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.999 [2024-11-28 12:46:41.764356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.999 [2024-11-28 12:46:41.764364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.999 [2024-11-28 12:46:41.764371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.999 [2024-11-28 12:46:41.764379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.999 [2024-11-28 12:46:41.764386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.999 [2024-11-28 12:46:41.764394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.999 [2024-11-28 12:46:41.764401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.999 [2024-11-28 12:46:41.764409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.999 [2024-11-28 12:46:41.764416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.999 [2024-11-28 12:46:41.764425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764431] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:38 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:14.000 [2024-11-28 12:46:41.764610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 
lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 
[2024-11-28 12:46:41.764862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.000 [2024-11-28 12:46:41.764898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.000 [2024-11-28 12:46:41.764913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.000 [2024-11-28 12:46:41.764928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.000 [2024-11-28 12:46:41.764942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.000 [2024-11-28 12:46:41.764962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.000 [2024-11-28 12:46:41.764970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.000 [2024-11-28 12:46:41.764977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.764984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.001 [2024-11-28 12:46:41.764991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.764999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.001 [2024-11-28 12:46:41.765005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.001 [2024-11-28 12:46:41.765026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.001 [2024-11-28 12:46:41.765041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.001 [2024-11-28 12:46:41.765055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.001 [2024-11-28 12:46:41.765070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.001 [2024-11-28 12:46:41.765085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.001 [2024-11-28 12:46:41.765099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.001 [2024-11-28 12:46:41.765113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 
12:46:41.765121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.001 [2024-11-28 12:46:41.765128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.001 [2024-11-28 12:46:41.765142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.001 [2024-11-28 12:46:41.765156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.001 [2024-11-28 12:46:41.765171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.001 [2024-11-28 12:46:41.765186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.001 [2024-11-28 12:46:41.765201] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.001 [2024-11-28 12:46:41.765217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.001 [2024-11-28 12:46:41.765231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.001 [2024-11-28 12:46:41.765246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.001 [2024-11-28 12:46:41.765261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.001 [2024-11-28 12:46:41.765276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:14.001 [2024-11-28 12:46:41.765290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.001 [2024-11-28 12:46:41.765305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.001 [2024-11-28 12:46:41.765320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.001 [2024-11-28 12:46:41.765334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.001 [2024-11-28 12:46:41.765348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.001 [2024-11-28 12:46:41.765363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765371] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.001 [2024-11-28 12:46:41.765378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.001 [2024-11-28 12:46:41.765392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.001 [2024-11-28 12:46:41.765409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.001 [2024-11-28 12:46:41.765423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.001 [2024-11-28 12:46:41.765438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.001 [2024-11-28 12:46:41.765452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.001 [2024-11-28 12:46:41.765467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.001 [2024-11-28 12:46:41.765482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.001 [2024-11-28 12:46:41.765496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.001 [2024-11-28 12:46:41.765510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.001 [2024-11-28 12:46:41.765525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:14.001 [2024-11-28 12:46:41.765540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.001 [2024-11-28 12:46:41.765554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.001 [2024-11-28 12:46:41.765562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.002 [2024-11-28 12:46:41.765569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.002 [2024-11-28 12:46:41.765585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.002 [2024-11-28 12:46:41.765600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.002 [2024-11-28 12:46:41.765615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.002 [2024-11-28 12:46:41.765629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.002 [2024-11-28 12:46:41.765644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.002 [2024-11-28 12:46:41.765658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.002 [2024-11-28 12:46:41.765673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.002 [2024-11-28 12:46:41.765688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.765702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.765717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.765733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.765749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.765763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.765780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 
[2024-11-28 12:46:41.765795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.765810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.765824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.765840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.765854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.765869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765877] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.765884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.765899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.765913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.765928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.765943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.765964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.765981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.765990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.765996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.766004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.766011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.766018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.766025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.766034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.766040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.766049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.766056] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.766063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.766070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.766078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.766084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.766092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.766099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.766106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.766113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.766124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.766130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.766138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.766145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.766153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.002 [2024-11-28 12:46:41.766161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.002 [2024-11-28 12:46:41.766169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.003 [2024-11-28 12:46:41.766175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.003 [2024-11-28 12:46:41.766196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.003 [2024-11-28 12:46:41.766203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96072 len:8 PRP1 0x0 PRP2 0x0 00:23:14.003 [2024-11-28 12:46:41.766210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.003 [2024-11-28 12:46:41.766219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:14.003 [2024-11-28 12:46:41.766224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.003 [2024-11-28 12:46:41.766231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96080 len:8 PRP1 0x0 PRP2 0x0 00:23:14.003 [2024-11-28 12:46:41.766238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.003 [2024-11-28 
12:46:41.766281] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:14.003 [2024-11-28 12:46:41.766302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.003 [2024-11-28 12:46:41.766309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.003 [2024-11-28 12:46:41.766317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.003 [2024-11-28 12:46:41.766323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.003 [2024-11-28 12:46:41.766331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.003 [2024-11-28 12:46:41.766337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.003 [2024-11-28 12:46:41.766344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.003 [2024-11-28 12:46:41.766351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.003 [2024-11-28 12:46:41.766357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:23:14.003 [2024-11-28 12:46:41.769233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:14.003 [2024-11-28 12:46:41.769262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb14370 (9): Bad file descriptor 00:23:14.003 [2024-11-28 12:46:41.839549] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:23:14.003 10343.50 IOPS, 40.40 MiB/s [2024-11-28T11:46:56.522Z] 10477.00 IOPS, 40.93 MiB/s [2024-11-28T11:46:56.522Z] 10534.25 IOPS, 41.15 MiB/s [2024-11-28T11:46:56.522Z] [2024-11-28 12:46:45.289065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.003 [2024-11-28 12:46:45.289098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.003 [2024-11-28 12:46:45.289113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.003 [2024-11-28 12:46:45.289125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.003 [2024-11-28 12:46:45.289133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.003 [2024-11-28 12:46:45.289141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.003 [2024-11-28 12:46:45.289149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.003 [2024-11-28 12:46:45.289156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
00:23:14.003-00:23:14.006 [2024-11-28 12:46:45.289164 - 12:46:45.290933] nvme_qpair.c: [condensed: repeated per-command abort output during qid:1 submission-queue deletion]
  - 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 nsid:1 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, for LBAs 25584 through 25856 in steps of 8 (cids varied), each followed by
    474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
  - 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 nsid:1 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, for LBAs 25880 through 26256 in steps of 8 (cids varied), each followed by the same
    474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) completion
  - 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o, interleaved with
    558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: and
    243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 len:8 PRP1 0x0 PRP2 0x0, for LBAs 26264 through 26432 in steps of 8, each followed by the same ABORTED - SQ DELETION (00/08) completion (log truncated mid-entry at lba:26432)
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.006 [2024-11-28 12:46:45.290940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:14.006 [2024-11-28 12:46:45.290945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.006 [2024-11-28 12:46:45.290955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26440 len:8 PRP1 0x0 PRP2 0x0 00:23:14.006 [2024-11-28 12:46:45.290962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.006 [2024-11-28 12:46:45.290969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:14.006 [2024-11-28 12:46:45.290974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.006 [2024-11-28 12:46:45.290979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26448 len:8 PRP1 0x0 PRP2 0x0 00:23:14.006 [2024-11-28 12:46:45.290985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.006 [2024-11-28 12:46:45.290992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:14.006 [2024-11-28 12:46:45.290997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.006 [2024-11-28 12:46:45.291002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26456 len:8 PRP1 0x0 PRP2 0x0 00:23:14.006 [2024-11-28 12:46:45.291008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.006 [2024-11-28 12:46:45.291016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:14.006 [2024-11-28 12:46:45.291021] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.006 [2024-11-28 12:46:45.291028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26464 len:8 PRP1 0x0 PRP2 0x0 00:23:14.006 [2024-11-28 12:46:45.291035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.006 [2024-11-28 12:46:45.291042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:14.006 [2024-11-28 12:46:45.291047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.006 [2024-11-28 12:46:45.301512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26472 len:8 PRP1 0x0 PRP2 0x0 00:23:14.006 [2024-11-28 12:46:45.301527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.006 [2024-11-28 12:46:45.301538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:14.006 [2024-11-28 12:46:45.301545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.006 [2024-11-28 12:46:45.301553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26480 len:8 PRP1 0x0 PRP2 0x0 00:23:14.006 [2024-11-28 12:46:45.301561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.006 [2024-11-28 12:46:45.301570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:14.006 [2024-11-28 12:46:45.301577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.006 [2024-11-28 12:46:45.301584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26488 len:8 PRP1 0x0 PRP2 0x0 00:23:14.006 
[2024-11-28 12:46:45.301593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.006 [2024-11-28 12:46:45.301603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:14.006 [2024-11-28 12:46:45.301609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.006 [2024-11-28 12:46:45.301617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26496 len:8 PRP1 0x0 PRP2 0x0 00:23:14.006 [2024-11-28 12:46:45.301625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.006 [2024-11-28 12:46:45.301634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:14.006 [2024-11-28 12:46:45.301641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.006 [2024-11-28 12:46:45.301648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26504 len:8 PRP1 0x0 PRP2 0x0 00:23:14.006 [2024-11-28 12:46:45.301656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.006 [2024-11-28 12:46:45.301665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:14.006 [2024-11-28 12:46:45.301672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.006 [2024-11-28 12:46:45.301679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26512 len:8 PRP1 0x0 PRP2 0x0 00:23:14.006 [2024-11-28 12:46:45.301688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.006 [2024-11-28 12:46:45.301696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:14.006 [2024-11-28 12:46:45.301703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.006 [2024-11-28 12:46:45.301710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26520 len:8 PRP1 0x0 PRP2 0x0 00:23:14.007 [2024-11-28 12:46:45.301721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.007 [2024-11-28 12:46:45.301730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:14.007 [2024-11-28 12:46:45.301737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.007 [2024-11-28 12:46:45.301745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26528 len:8 PRP1 0x0 PRP2 0x0 00:23:14.007 [2024-11-28 12:46:45.301755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.007 [2024-11-28 12:46:45.301764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:14.007 [2024-11-28 12:46:45.301770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.007 [2024-11-28 12:46:45.301778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26536 len:8 PRP1 0x0 PRP2 0x0 00:23:14.007 [2024-11-28 12:46:45.301786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.007 [2024-11-28 12:46:45.301795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:14.007 [2024-11-28 12:46:45.301802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.007 [2024-11-28 12:46:45.301809] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26544 len:8 PRP1 0x0 PRP2 0x0 00:23:14.007 [2024-11-28 12:46:45.301817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.007 [2024-11-28 12:46:45.301826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:14.007 [2024-11-28 12:46:45.301833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.007 [2024-11-28 12:46:45.301840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26552 len:8 PRP1 0x0 PRP2 0x0 00:23:14.007 [2024-11-28 12:46:45.301849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.007 [2024-11-28 12:46:45.301859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:14.007 [2024-11-28 12:46:45.301866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.007 [2024-11-28 12:46:45.301873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26560 len:8 PRP1 0x0 PRP2 0x0 00:23:14.007 [2024-11-28 12:46:45.301882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.007 [2024-11-28 12:46:45.301891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:14.007 [2024-11-28 12:46:45.301898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.007 [2024-11-28 12:46:45.301905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26568 len:8 PRP1 0x0 PRP2 0x0 00:23:14.007 [2024-11-28 12:46:45.301914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:14.007 [2024-11-28 12:46:45.301923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:14.007 [2024-11-28 12:46:45.301929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.007 [2024-11-28 12:46:45.301937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25864 len:8 PRP1 0x0 PRP2 0x0 00:23:14.007 [2024-11-28 12:46:45.301945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.007 [2024-11-28 12:46:45.301959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:14.007 [2024-11-28 12:46:45.301965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.007 [2024-11-28 12:46:45.301975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25872 len:8 PRP1 0x0 PRP2 0x0 00:23:14.007 [2024-11-28 12:46:45.301983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.007 [2024-11-28 12:46:45.302032] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:14.007 [2024-11-28 12:46:45.302059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.007 [2024-11-28 12:46:45.302069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.007 [2024-11-28 12:46:45.302080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.007 [2024-11-28 12:46:45.302088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.007 [2024-11-28 12:46:45.302097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.007 [2024-11-28 12:46:45.302106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.007 [2024-11-28 12:46:45.302116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.007 [2024-11-28 12:46:45.302124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.007 [2024-11-28 12:46:45.302133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:14.007 [2024-11-28 12:46:45.302173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb14370 (9): Bad file descriptor 00:23:14.007 [2024-11-28 12:46:45.306080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:14.007 [2024-11-28 12:46:45.375299] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
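The abort/failover/reset sequence above is easier to digest mechanically than by eye. A minimal sketch of a parser for these SPDK log lines (the regexes and the `summarize` helper are my own illustration, not part of SPDK; the sample lines are copied verbatim from this log):

```python
import re

# Sample lines taken verbatim from the log above: one aborted WRITE
# completion and the bdev_nvme failover notice.
sample = """\
[2024-11-28 12:46:45.290618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26328 len:8 PRP1 0x0 PRP2 0x0
[2024-11-28 12:46:45.290625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-28 12:46:45.302032] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
"""

ABORT_RE = re.compile(r"ABORTED - SQ DELETION")
FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")

def summarize(log: str):
    """Count SQ-deletion aborts and capture the failover source/target."""
    aborts = 0
    failover = None
    for line in log.splitlines():
        if ABORT_RE.search(line):
            aborts += 1
        m = FAILOVER_RE.search(line)
        if m:
            failover = (m.group(1), m.group(2))
    return aborts, failover

print(summarize(sample))  # -> (1, ('10.0.0.2:4421', '10.0.0.2:4422'))
```

Running the same helper over the full log section would show one abort pair per queued I/O flushed during the path switch from 10.0.0.2:4421 to 10.0.0.2:4422.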
00:23:14.007 10404.80 IOPS, 40.64 MiB/s [2024-11-28T11:46:56.526Z] 10502.00 IOPS, 41.02 MiB/s [2024-11-28T11:46:56.526Z] 10536.29 IOPS, 41.16 MiB/s [2024-11-28T11:46:56.526Z] 10571.38 IOPS, 41.29 MiB/s [2024-11-28T11:46:56.526Z] 10596.00 IOPS, 41.39 MiB/s [2024-11-28T11:46:56.526Z]
00:23:14.007 [2024-11-28 12:46:49.731766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:14.007 [2024-11-28 12:46:49.731799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the print_command / ABORTED - SQ DELETION (00/08) pair repeats between 12:46:49.731814 and 12:46:49.732682 for WRITE sqid:1 (varying cid) lba:35920 through lba:36024 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and for READ sqid:1 (varying cid) lba:35024 through lba:35344 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) ...]
[2024-11-28 12:46:49.732690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:35352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.732697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.732705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:35360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.732711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.732720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.732726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.732735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:35376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.732741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.732749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:35384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.732755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.732763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.732770] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.732777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:35400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.732784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.732792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:35408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.732798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.732806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:35416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.732813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.732821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.732827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.732835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:35432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.732842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.732850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 
lba:35440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.732856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.732865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.732872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.732880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.732887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.732895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:35464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.732901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.732910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:35472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.732916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.732924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:35480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.732930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 
[2024-11-28 12:46:49.732938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:35488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.732945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.732957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:35496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.732964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.732972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.732978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.732986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:35512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.732992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.733000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:35520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.733007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.733015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:35528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.733021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.733029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.733035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.733043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:35544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.733050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.733060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:35552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.733067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.733075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:35560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.733081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.733089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.733095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.733104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 
lba:35576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.733111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.733118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.733125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.733133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.733139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.733147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.733154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.733162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:35608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.733169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.733177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:35616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.733183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 
[2024-11-28 12:46:49.733191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:35624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.009 [2024-11-28 12:46:49.733197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.009 [2024-11-28 12:46:49.733205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:35632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:35648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:35656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:35664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:35680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:35688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:35696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:35704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:35712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:35728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:35752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 
[2024-11-28 12:46:49.733446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:35768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:35776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:35792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:35808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:35824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:35832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:35840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 
lba:35848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:35856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:35872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:35880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 [2024-11-28 12:46:49.733680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:35888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.010 [2024-11-28 12:46:49.733687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.010 
[2024-11-28 12:46:49.733695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:14.010 [2024-11-28 12:46:49.733701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:14.010 [2024-11-28 12:46:49.733709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:35904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:14.010 [2024-11-28 12:46:49.733715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:14.010 [2024-11-28 12:46:49.733723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:14.010 [2024-11-28 12:46:49.733730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:14.010 [2024-11-28 12:46:49.733749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:14.010 [2024-11-28 12:46:49.733755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:14.010 [2024-11-28 12:46:49.733761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36040 len:8 PRP1 0x0 PRP2 0x0
00:23:14.010 [2024-11-28 12:46:49.733769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:14.010 [2024-11-28 12:46:49.733813] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:23:14.010 [2024-11-28 12:46:49.733835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:14.010 [2024-11-28 12:46:49.733842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:14.010 [2024-11-28 12:46:49.733850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:14.010 [2024-11-28 12:46:49.733856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:14.011 [2024-11-28 12:46:49.733865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:14.011 [2024-11-28 12:46:49.733872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:14.011 [2024-11-28 12:46:49.733879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:14.011 [2024-11-28 12:46:49.733885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:14.011 [2024-11-28 12:46:49.733892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:23:14.011 [2024-11-28 12:46:49.736778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:23:14.011 [2024-11-28 12:46:49.736808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb14370 (9): Bad file descriptor
00:23:14.011 [2024-11-28 12:46:49.805301] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:23:14.011 10545.70 IOPS, 41.19 MiB/s [2024-11-28T11:46:56.530Z]
00:23:14.011 10585.00 IOPS, 41.35 MiB/s [2024-11-28T11:46:56.530Z]
00:23:14.011 10594.92 IOPS, 41.39 MiB/s [2024-11-28T11:46:56.530Z]
00:23:14.011 10612.85 IOPS, 41.46 MiB/s [2024-11-28T11:46:56.530Z]
00:23:14.011 10624.57 IOPS, 41.50 MiB/s
00:23:14.011 Latency(us)
00:23:14.011 Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average   min     max
00:23:14.011 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:14.011 Verification LBA range: start 0x0 length 0x4000
00:23:14.011 NVMe0n1            : 15.00       10637.48  41.55  668.38  0.00  11298.67  418.50  21427.42
00:23:14.011 ===================================================================================================================
00:23:14.011 Total              :             10637.48  41.55  668.38  0.00  11298.67  418.50  21427.42
00:23:14.011 Received shutdown signal, test time was about 15.000000 seconds
00:23:14.011
00:23:14.011 Latency(us)
00:23:14.011 Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average   min     max
00:23:14.011 ===================================================================================================================
00:23:14.011 Total              :             0.00      0.00   0.00    0.00  0.00      0.00    0.00
12:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
12:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
12:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
12:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2621886
12:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
12:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2621886 /var/tmp/bdevperf.sock 00:23:14.011 12:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2621886 ']' 00:23:14.011 12:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:14.011 12:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.011 12:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:14.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:14.011 12:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.011 12:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:14.011 12:46:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.011 12:46:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:14.011 12:46:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:14.011 [2024-11-28 12:46:56.370911] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:14.011 12:46:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:14.271 [2024-11-28 12:46:56.571486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:14.271 12:46:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:14.531 NVMe0n1 00:23:14.531 12:46:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:14.790 00:23:14.790 12:46:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:15.358 00:23:15.358 12:46:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:15.358 12:46:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:15.617 12:46:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:15.617 12:46:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:18.904 12:47:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:18.904 12:47:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:18.904 12:47:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2622808 00:23:18.904 12:47:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:18.904 12:47:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2622808 00:23:20.298 { 00:23:20.298 "results": [ 00:23:20.298 { 00:23:20.298 "job": "NVMe0n1", 00:23:20.298 "core_mask": "0x1", 00:23:20.298 "workload": "verify", 00:23:20.298 "status": "finished", 00:23:20.298 "verify_range": { 00:23:20.298 "start": 0, 00:23:20.298 "length": 16384 00:23:20.298 }, 00:23:20.298 "queue_depth": 128, 00:23:20.298 "io_size": 4096, 00:23:20.298 "runtime": 1.050681, 00:23:20.298 "iops": 10372.32042836979, 00:23:20.298 "mibps": 40.516876673319494, 00:23:20.298 "io_failed": 0, 00:23:20.298 "io_timeout": 0, 00:23:20.298 "avg_latency_us": 11943.032917088896, 00:23:20.298 "min_latency_us": 2564.4521739130437, 00:23:20.298 "max_latency_us": 43310.747826086954 00:23:20.298 } 00:23:20.298 ], 00:23:20.298 "core_count": 1 00:23:20.298 } 00:23:20.298 12:47:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:20.298 [2024-11-28 12:46:55.987054] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:23:20.298 [2024-11-28 12:46:55.987110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2621886 ] 00:23:20.298 [2024-11-28 12:46:56.051788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.298 [2024-11-28 12:46:56.090080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.298 [2024-11-28 12:46:58.094151] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:20.298 [2024-11-28 12:46:58.094199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.298 [2024-11-28 12:46:58.094210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.298 [2024-11-28 12:46:58.094219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.298 [2024-11-28 12:46:58.094226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.298 [2024-11-28 12:46:58.094233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.298 [2024-11-28 12:46:58.094240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.299 [2024-11-28 12:46:58.094247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.299 [2024-11-28 12:46:58.094253] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.299 [2024-11-28 12:46:58.094260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:23:20.299 [2024-11-28 12:46:58.094286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:20.299 [2024-11-28 12:46:58.094300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf21370 (9): Bad file descriptor 00:23:20.299 [2024-11-28 12:46:58.099791] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:20.299 Running I/O for 1 seconds... 00:23:20.299 10714.00 IOPS, 41.85 MiB/s 00:23:20.299 Latency(us) 00:23:20.299 [2024-11-28T11:47:02.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.299 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:20.299 Verification LBA range: start 0x0 length 0x4000 00:23:20.299 NVMe0n1 : 1.05 10372.32 40.52 0.00 0.00 11943.03 2564.45 43310.75 00:23:20.299 [2024-11-28T11:47:02.818Z] =================================================================================================================== 00:23:20.299 [2024-11-28T11:47:02.818Z] Total : 10372.32 40.52 0.00 0.00 11943.03 2564.45 43310.75 00:23:20.299 12:47:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:20.299 12:47:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:20.299 12:47:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:20.557 12:47:02 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:20.557 12:47:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:20.816 12:47:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:20.816 12:47:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:24.098 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:24.098 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:24.098 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2621886 00:23:24.098 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2621886 ']' 00:23:24.098 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2621886 00:23:24.098 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:24.098 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.098 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2621886 00:23:24.098 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:24.098 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:24.098 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2621886' 00:23:24.098 killing 
process with pid 2621886 00:23:24.098 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2621886 00:23:24.098 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2621886 00:23:24.356 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:24.356 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:24.614 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:24.614 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:24.614 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:24.614 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:24.614 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:24.614 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:24.614 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:24.614 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:24.614 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:24.614 rmmod nvme_tcp 00:23:24.614 rmmod nvme_fabrics 00:23:24.614 rmmod nvme_keyring 00:23:24.614 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:24.614 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:24.614 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:24.614 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2618980 ']' 00:23:24.614 12:47:06 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2618980 00:23:24.614 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2618980 ']' 00:23:24.614 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2618980 00:23:24.614 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:24.614 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.614 12:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2618980 00:23:24.614 12:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:24.614 12:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:24.614 12:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2618980' 00:23:24.614 killing process with pid 2618980 00:23:24.614 12:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2618980 00:23:24.614 12:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2618980 00:23:24.872 12:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:24.872 12:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:24.872 12:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:24.872 12:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:24.872 12:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:24.872 12:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:24.872 12:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:24.872 12:47:07 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:24.872 12:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:24.872 12:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.872 12:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.872 12:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.775 12:47:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:26.775 00:23:26.775 real 0m37.053s 00:23:26.775 user 1m58.765s 00:23:26.775 sys 0m7.615s 00:23:26.775 12:47:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:26.775 12:47:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:26.775 ************************************ 00:23:27.033 END TEST nvmf_failover 00:23:27.033 ************************************ 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.033 ************************************ 00:23:27.033 START TEST nvmf_host_discovery 00:23:27.033 ************************************ 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:27.033 * Looking for test storage... 
00:23:27.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:27.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.033 --rc genhtml_branch_coverage=1 00:23:27.033 --rc genhtml_function_coverage=1 00:23:27.033 --rc 
genhtml_legend=1 00:23:27.033 --rc geninfo_all_blocks=1 00:23:27.033 --rc geninfo_unexecuted_blocks=1 00:23:27.033 00:23:27.033 ' 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:27.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.033 --rc genhtml_branch_coverage=1 00:23:27.033 --rc genhtml_function_coverage=1 00:23:27.033 --rc genhtml_legend=1 00:23:27.033 --rc geninfo_all_blocks=1 00:23:27.033 --rc geninfo_unexecuted_blocks=1 00:23:27.033 00:23:27.033 ' 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:27.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.033 --rc genhtml_branch_coverage=1 00:23:27.033 --rc genhtml_function_coverage=1 00:23:27.033 --rc genhtml_legend=1 00:23:27.033 --rc geninfo_all_blocks=1 00:23:27.033 --rc geninfo_unexecuted_blocks=1 00:23:27.033 00:23:27.033 ' 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:27.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.033 --rc genhtml_branch_coverage=1 00:23:27.033 --rc genhtml_function_coverage=1 00:23:27.033 --rc genhtml_legend=1 00:23:27.033 --rc geninfo_all_blocks=1 00:23:27.033 --rc geninfo_unexecuted_blocks=1 00:23:27.033 00:23:27.033 ' 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.033 12:47:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.033 12:47:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.033 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.034 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:27.034 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.034 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:27.034 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:27.034 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:27.034 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:27.034 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.034 12:47:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:27.034 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:23:27.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:23:27.034 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:23:27.034 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:23:27.034 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:23:27.292 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:23:27.292 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:23:27.292 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:23:27.292 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
00:23:27.292 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:23:27.292 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
00:23:27.292 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit
00:23:27.292 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:23:27.292 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:27.292 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs
00:23:27.292 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no
00:23:27.292 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns
00:23:27.292 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:27.292 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:27.292 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:27.292 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:23:27.292 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:23:27.292 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable
00:23:27.292 12:47:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=()
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=()
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=()
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=()
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=()
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=()
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=()
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:23:32.561 Found 0000:86:00.0 (0x8086 - 0x159b)
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:23:32.561 Found 0000:86:00.1 (0x8086 - 0x159b)
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:32.561 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:23:32.562 Found net devices under 0000:86:00.0: cvl_0_0
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:23:32.562 Found net devices under 0000:86:00.1: cvl_0_1
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:23:32.562 12:47:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:32.562 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:32.562 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:32.562 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:23:32.562 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:32.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:32.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms
00:23:32.818
00:23:32.818 --- 10.0.0.2 ping statistics ---
00:23:32.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:32.818 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:32.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:32.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms
00:23:32.818
00:23:32.818 --- 10.0.0.1 ping statistics ---
00:23:32.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:32.818 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2627254
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2627254
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2627254 ']'
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:32.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:32.818 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:32.819 [2024-11-28 12:47:15.219454] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization...
00:23:32.819 [2024-11-28 12:47:15.219496] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:32.819 [2024-11-28 12:47:15.286588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:32.819 [2024-11-28 12:47:15.327230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:32.819 [2024-11-28 12:47:15.327268] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:32.819 [2024-11-28 12:47:15.327275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:32.819 [2024-11-28 12:47:15.327281] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:32.819 [2024-11-28 12:47:15.327287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:32.819 [2024-11-28 12:47:15.327844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.076 [2024-11-28 12:47:15.460563] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.076 [2024-11-28 12:47:15.472757] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.076 null0
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.076 null1
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2627282
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2627282 /tmp/host.sock
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2627282 ']'
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:23:33.076 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:33.076 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.076 [2024-11-28 12:47:15.551836] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization...
00:23:33.076 [2024-11-28 12:47:15.551879] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2627282 ]
00:23:33.334 [2024-11-28 12:47:15.613299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:33.334 [2024-11-28 12:47:15.655900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:33.334 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:33.334 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:23:33.334 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:33.334 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:23:33.334 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:33.334 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.334 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:33.334 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:23:33.334 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:33.334 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.334 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:33.334 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:23:33.334 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:23:33.334 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:33.334 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:33.334 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:33.334 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:33.334 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.334 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:33.334 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:33.334 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:23:33.334 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:23:33.335 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:33.335 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:33.335 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:33.335 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:33.335 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.335 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:33.335 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:33.592 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:23:33.592 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:23:33.592 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:33.592 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:33.593 12:47:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:33.593 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:23:33.593 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:23:33.593 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:33.593 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:33.593 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:33.593 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:33.593 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.593 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:33.593 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:33.593 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:23:33.593 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:23:33.593 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:33.593 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.593 [2024-11-28 12:47:16.074292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:33.593 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:33.593 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:23:33.593 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:33.593 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:33.593 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:33.593 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:33.593 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.593 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]]
00:23:33.851 12:47:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:23:34.416 [2024-11-28 12:47:16.818463] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:23:34.416 [2024-11-28 12:47:16.818482] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:23:34.416 [2024-11-28 12:47:16.818495] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:34.675 [2024-11-28 12:47:16.944875] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:23:34.675 [2024-11-28 12:47:17.119901] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:23:34.675 [2024-11-28 12:47:17.120627] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0,
1] Connecting qpair 0x185be30:1 started. 00:23:34.675 [2024-11-28 12:47:17.122022] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:34.675 [2024-11-28 12:47:17.122038] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:34.675 [2024-11-28 12:47:17.168584] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x185be30 was disconnected and freed. delete nvme_qpair. 00:23:34.933 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:34.933 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:34.933 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:34.933 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:34.933 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:34.933 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:34.933 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.933 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:34.933 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.933 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.933 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.933 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:34.933 12:47:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:34.933 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:34.933 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:34.933 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:34.934 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.192 [2024-11-28 12:47:17.462056] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x185c1b0:1 started. 
00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:35.192 [2024-11-28 12:47:17.468223] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x185c1b0 was disconnected and freed. delete nvme_qpair. 
00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.192 [2024-11-28 12:47:17.566305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:35.192 [2024-11-28 12:47:17.566523] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:35.192 [2024-11-28 12:47:17.566544] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 
00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:35.192 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.193 [2024-11-28 12:47:17.652794] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:35.193 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.193 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:35.193 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:35.193 12:47:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:35.193 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:35.193 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:35.193 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:35.193 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:35.193 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:35.193 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:35.193 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:35.193 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.193 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:35.193 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.193 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:35.193 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.193 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:35.193 12:47:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:35.450 [2024-11-28 12:47:17.713336] bdev_nvme.c:5636:nvme_ctrlr_create_done: 
*INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:35.450 [2024-11-28 12:47:17.713372] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:35.450 [2024-11-28 12:47:17.713380] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:35.450 [2024-11-28 12:47:17.713384] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@922 -- # return 0 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.386 [2024-11-28 12:47:18.794150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.386 [2024-11-28 12:47:18.794175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.386 [2024-11-28 12:47:18.794185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.386 [2024-11-28 12:47:18.794192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.386 [2024-11-28 12:47:18.794199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.386 [2024-11-28 12:47:18.794206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.386 [2024-11-28 12:47:18.794213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.386 [2024-11-28 12:47:18.794223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.386 [2024-11-28 12:47:18.794230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182c390 is same with the state(6) to be set 00:23:36.386 [2024-11-28 12:47:18.794511] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:36.386 [2024-11-28 12:47:18.794524] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:36.386 [2024-11-28 12:47:18.804160] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182c390 (9): Bad file descriptor 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:36.386 [2024-11-28 12:47:18.814193] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:36.386 [2024-11-28 12:47:18.814207] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:36.386 [2024-11-28 12:47:18.814211] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:36.386 [2024-11-28 12:47:18.814215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:36.386 [2024-11-28 12:47:18.814232] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:36.386 [2024-11-28 12:47:18.814497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.386 [2024-11-28 12:47:18.814512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182c390 with addr=10.0.0.2, port=4420 00:23:36.386 [2024-11-28 12:47:18.814519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182c390 is same with the state(6) to be set 00:23:36.386 [2024-11-28 12:47:18.814531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182c390 (9): Bad file descriptor 00:23:36.386 [2024-11-28 12:47:18.814541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:36.386 [2024-11-28 12:47:18.814548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:36.386 [2024-11-28 12:47:18.814555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:36.386 [2024-11-28 12:47:18.814561] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:36.386 [2024-11-28 12:47:18.814566] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:36.386 [2024-11-28 12:47:18.814574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:36.386 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.386 [2024-11-28 12:47:18.824262] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:36.386 [2024-11-28 12:47:18.824273] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:36.386 [2024-11-28 12:47:18.824278] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:36.386 [2024-11-28 12:47:18.824282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:36.386 [2024-11-28 12:47:18.824295] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:36.386 [2024-11-28 12:47:18.824541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.386 [2024-11-28 12:47:18.824553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182c390 with addr=10.0.0.2, port=4420 00:23:36.386 [2024-11-28 12:47:18.824561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182c390 is same with the state(6) to be set 00:23:36.386 [2024-11-28 12:47:18.824573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182c390 (9): Bad file descriptor 00:23:36.386 [2024-11-28 12:47:18.824583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:36.386 [2024-11-28 12:47:18.824590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:36.387 [2024-11-28 12:47:18.824597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:36.387 [2024-11-28 12:47:18.824604] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:36.387 [2024-11-28 12:47:18.824609] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:36.387 [2024-11-28 12:47:18.824614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:36.387 [2024-11-28 12:47:18.834326] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:36.387 [2024-11-28 12:47:18.834337] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:36.387 [2024-11-28 12:47:18.834341] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:36.387 [2024-11-28 12:47:18.834345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:36.387 [2024-11-28 12:47:18.834358] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:36.387 [2024-11-28 12:47:18.834545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.387 [2024-11-28 12:47:18.834556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182c390 with addr=10.0.0.2, port=4420 00:23:36.387 [2024-11-28 12:47:18.834564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182c390 is same with the state(6) to be set 00:23:36.387 [2024-11-28 12:47:18.834575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182c390 (9): Bad file descriptor 00:23:36.387 [2024-11-28 12:47:18.834584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:36.387 [2024-11-28 12:47:18.834590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:36.387 [2024-11-28 12:47:18.834597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:36.387 [2024-11-28 12:47:18.834606] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:36.387 [2024-11-28 12:47:18.834611] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:36.387 [2024-11-28 12:47:18.834615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:36.387 [2024-11-28 12:47:18.844390] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:36.387 [2024-11-28 12:47:18.844405] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:36.387 [2024-11-28 12:47:18.844411] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:36.387 [2024-11-28 12:47:18.844416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:36.387 [2024-11-28 12:47:18.844431] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:36.387 [2024-11-28 12:47:18.844634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.387 [2024-11-28 12:47:18.844648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182c390 with addr=10.0.0.2, port=4420 00:23:36.387 [2024-11-28 12:47:18.844656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182c390 is same with the state(6) to be set 00:23:36.387 [2024-11-28 12:47:18.844668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182c390 (9): Bad file descriptor 00:23:36.387 [2024-11-28 12:47:18.844679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:36.387 [2024-11-28 12:47:18.844686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:36.387 [2024-11-28 12:47:18.844694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:36.387 [2024-11-28 12:47:18.844700] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:36.387 [2024-11-28 12:47:18.844705] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:36.387 [2024-11-28 12:47:18.844709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:36.387 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.387 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:36.387 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:36.387 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:36.387 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:36.387 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:36.387 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:36.387 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:36.387 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.387 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:36.387 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:36.387 [2024-11-28 12:47:18.854462] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:36.387 [2024-11-28 12:47:18.854475] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:36.387 [2024-11-28 12:47:18.854483] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:23:36.387 [2024-11-28 12:47:18.854487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:36.387 [2024-11-28 12:47:18.854499] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:36.387 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.387 [2024-11-28 12:47:18.854629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.387 [2024-11-28 12:47:18.854642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182c390 with addr=10.0.0.2, port=4420 00:23:36.387 [2024-11-28 12:47:18.854649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182c390 is same with the state(6) to be set 00:23:36.387 [2024-11-28 12:47:18.854660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182c390 (9): Bad file descriptor 00:23:36.387 [2024-11-28 12:47:18.854670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:36.387 [2024-11-28 12:47:18.854677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:36.387 [2024-11-28 12:47:18.854684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:36.387 [2024-11-28 12:47:18.854690] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:36.387 [2024-11-28 12:47:18.854694] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:36.387 [2024-11-28 12:47:18.854698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:36.387 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:36.387 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.387 [2024-11-28 12:47:18.864532] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:36.387 [2024-11-28 12:47:18.864547] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:36.387 [2024-11-28 12:47:18.864551] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:36.387 [2024-11-28 12:47:18.864555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:36.387 [2024-11-28 12:47:18.864569] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:36.387 [2024-11-28 12:47:18.864825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.387 [2024-11-28 12:47:18.864838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182c390 with addr=10.0.0.2, port=4420 00:23:36.387 [2024-11-28 12:47:18.864845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182c390 is same with the state(6) to be set 00:23:36.387 [2024-11-28 12:47:18.864856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182c390 (9): Bad file descriptor 00:23:36.387 [2024-11-28 12:47:18.864872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:36.387 [2024-11-28 12:47:18.864878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:36.387 [2024-11-28 12:47:18.864885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:36.387 [2024-11-28 12:47:18.864891] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:36.387 [2024-11-28 12:47:18.864895] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:36.387 [2024-11-28 12:47:18.864902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:36.387 [2024-11-28 12:47:18.874601] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:36.387 [2024-11-28 12:47:18.874612] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:36.387 [2024-11-28 12:47:18.874616] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:36.387 [2024-11-28 12:47:18.874620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:36.387 [2024-11-28 12:47:18.874633] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:36.387 [2024-11-28 12:47:18.874822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.387 [2024-11-28 12:47:18.874833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182c390 with addr=10.0.0.2, port=4420 00:23:36.387 [2024-11-28 12:47:18.874841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182c390 is same with the state(6) to be set 00:23:36.387 [2024-11-28 12:47:18.874852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182c390 (9): Bad file descriptor 00:23:36.387 [2024-11-28 12:47:18.874863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:36.387 [2024-11-28 12:47:18.874871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:36.387 [2024-11-28 12:47:18.874879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:36.387 [2024-11-28 12:47:18.874885] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:36.388 [2024-11-28 12:47:18.874892] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:36.388 [2024-11-28 12:47:18.874897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:36.388 [2024-11-28 12:47:18.880945] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:36.388 [2024-11-28 12:47:18.880967] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:36.388 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.388 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:36.388 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:36.388 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:36.388 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:36.388 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:36.388 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:36.388 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:36.388 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:36.646 12:47:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:36.646 12:47:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:36.646 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.646 12:47:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:36.646 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.646 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.646 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:36.646 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:36.646 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:36.646 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:36.646 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:36.646 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:36.646 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:36.646 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:36.646 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:36.646 12:47:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.646 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:36.646 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.646 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:36.646 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.646 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.646 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:36.646 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:36.647 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:36.647 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:36.647 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:36.647 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:36.647 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:36.647 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:36.647 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:36.647 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:36.647 12:47:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:36.647 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:36.647 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.647 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.647 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.647 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:36.647 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:36.647 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:36.647 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:36.647 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:36.647 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.647 12:47:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.020 [2024-11-28 12:47:20.197485] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:38.020 [2024-11-28 12:47:20.197502] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:38.020 [2024-11-28 12:47:20.197513] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:38.020 [2024-11-28 12:47:20.324940] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:38.020 [2024-11-28 12:47:20.430651] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:23:38.020 [2024-11-28 12:47:20.431252] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1842830:1 started. 00:23:38.020 [2024-11-28 12:47:20.432863] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:38.020 [2024-11-28 12:47:20.432889] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:38.020 [2024-11-28 12:47:20.436247] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1842830 was disconnected and freed. delete nvme_qpair. 
00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.020 request: 00:23:38.020 { 00:23:38.020 "name": "nvme", 00:23:38.020 "trtype": "tcp", 00:23:38.020 "traddr": "10.0.0.2", 00:23:38.020 "adrfam": "ipv4", 00:23:38.020 "trsvcid": "8009", 00:23:38.020 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:38.020 "wait_for_attach": true, 00:23:38.020 "method": "bdev_nvme_start_discovery", 00:23:38.020 "req_id": 1 00:23:38.020 } 00:23:38.020 Got JSON-RPC error response 00:23:38.020 response: 00:23:38.020 { 00:23:38.020 "code": -17, 00:23:38.020 "message": "File exists" 00:23:38.020 } 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # 
get_discovery_ctrlrs 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.020 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.279 request: 00:23:38.279 { 00:23:38.279 "name": "nvme_second", 00:23:38.279 "trtype": "tcp", 00:23:38.279 "traddr": "10.0.0.2", 00:23:38.279 "adrfam": "ipv4", 00:23:38.279 "trsvcid": "8009", 00:23:38.279 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:38.279 "wait_for_attach": true, 00:23:38.279 "method": "bdev_nvme_start_discovery", 00:23:38.279 "req_id": 1 00:23:38.279 } 00:23:38.279 Got JSON-RPC error response 00:23:38.279 response: 00:23:38.279 { 00:23:38.279 "code": -17, 00:23:38.279 "message": "File exists" 00:23:38.279 } 
00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:38.279 12:47:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.212 [2024-11-28 12:47:21.664469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:39.212 [2024-11-28 12:47:21.664497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182d130 with addr=10.0.0.2, port=8010 00:23:39.212 [2024-11-28 12:47:21.664510] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:39.212 [2024-11-28 12:47:21.664517] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:39.212 [2024-11-28 12:47:21.664522] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:40.583 [2024-11-28 12:47:22.666961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:40.583 [2024-11-28 12:47:22.666987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182d130 with addr=10.0.0.2, port=8010 00:23:40.583 [2024-11-28 12:47:22.666999] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:40.583 [2024-11-28 12:47:22.667005] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:40.583 [2024-11-28 12:47:22.667011] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:41.517 [2024-11-28 12:47:23.669128] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:41.517 request: 00:23:41.517 { 00:23:41.517 "name": "nvme_second", 00:23:41.517 "trtype": "tcp", 00:23:41.517 "traddr": "10.0.0.2", 00:23:41.517 "adrfam": "ipv4", 00:23:41.517 "trsvcid": "8010", 00:23:41.517 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:41.517 "wait_for_attach": false, 00:23:41.517 "attach_timeout_ms": 3000, 00:23:41.517 "method": "bdev_nvme_start_discovery", 00:23:41.517 "req_id": 1 
00:23:41.517 } 00:23:41.517 Got JSON-RPC error response 00:23:41.517 response: 00:23:41.517 { 00:23:41.517 "code": -110, 00:23:41.517 "message": "Connection timed out" 00:23:41.517 } 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2627282 00:23:41.517 12:47:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:41.517 rmmod nvme_tcp 00:23:41.517 rmmod nvme_fabrics 00:23:41.517 rmmod nvme_keyring 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2627254 ']' 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2627254 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2627254 ']' 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2627254 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2627254 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2627254' 00:23:41.517 killing process with pid 2627254 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2627254 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2627254 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:41.517 12:47:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:41.517 12:47:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:41.517 12:47:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:41.517 12:47:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.517 12:47:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.517 12:47:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 
addr flush cvl_0_1 00:23:44.049 00:23:44.049 real 0m16.708s 00:23:44.049 user 0m20.104s 00:23:44.049 sys 0m5.485s 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.049 ************************************ 00:23:44.049 END TEST nvmf_host_discovery 00:23:44.049 ************************************ 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.049 ************************************ 00:23:44.049 START TEST nvmf_host_multipath_status 00:23:44.049 ************************************ 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:44.049 * Looking for test storage... 
00:23:44.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:44.049 12:47:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:44.049 12:47:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:44.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.049 --rc genhtml_branch_coverage=1 00:23:44.049 --rc genhtml_function_coverage=1 00:23:44.049 --rc genhtml_legend=1 00:23:44.049 --rc geninfo_all_blocks=1 00:23:44.049 --rc geninfo_unexecuted_blocks=1 00:23:44.049 00:23:44.049 ' 00:23:44.049 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:44.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.050 --rc genhtml_branch_coverage=1 00:23:44.050 --rc genhtml_function_coverage=1 00:23:44.050 --rc genhtml_legend=1 00:23:44.050 --rc geninfo_all_blocks=1 00:23:44.050 --rc geninfo_unexecuted_blocks=1 00:23:44.050 00:23:44.050 ' 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:44.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.050 --rc genhtml_branch_coverage=1 00:23:44.050 --rc genhtml_function_coverage=1 00:23:44.050 --rc genhtml_legend=1 00:23:44.050 --rc geninfo_all_blocks=1 00:23:44.050 --rc geninfo_unexecuted_blocks=1 00:23:44.050 00:23:44.050 ' 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:44.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.050 --rc genhtml_branch_coverage=1 00:23:44.050 --rc genhtml_function_coverage=1 00:23:44.050 --rc genhtml_legend=1 00:23:44.050 --rc geninfo_all_blocks=1 00:23:44.050 --rc geninfo_unexecuted_blocks=1 00:23:44.050 00:23:44.050 ' 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:44.050 
12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:44.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:44.050 12:47:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:44.050 12:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:49.310 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:49.310 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:49.310 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:49.311 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:49.311 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:49.311 Found net devices under 0000:86:00.0: cvl_0_0 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.311 12:47:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:49.311 Found net devices under 0000:86:00.1: cvl_0_1 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:49.311 12:47:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:49.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:49.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:23:49.311 00:23:49.311 --- 10.0.0.2 ping statistics --- 00:23:49.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.311 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:49.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:49.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:23:49.311 00:23:49.311 --- 10.0.0.1 ping statistics --- 00:23:49.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.311 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2632126 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 2632126 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2632126 ']' 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:49.311 [2024-11-28 12:47:31.609445] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:23:49.311 [2024-11-28 12:47:31.609492] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.311 [2024-11-28 12:47:31.676383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:49.311 [2024-11-28 12:47:31.718435] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.311 [2024-11-28 12:47:31.718473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:49.311 [2024-11-28 12:47:31.718481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.311 [2024-11-28 12:47:31.718487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.311 [2024-11-28 12:47:31.718492] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:49.311 [2024-11-28 12:47:31.719664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.311 [2024-11-28 12:47:31.719668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:49.311 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:49.568 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:49.568 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2632126 00:23:49.568 12:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:49.568 [2024-11-28 12:47:32.021125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:49.568 12:47:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:23:49.826 Malloc0 00:23:49.826 12:47:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:50.082 12:47:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:50.339 12:47:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:50.339 [2024-11-28 12:47:32.786816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.339 12:47:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:50.596 [2024-11-28 12:47:32.983373] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:50.596 12:47:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2632388 00:23:50.596 12:47:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:50.596 12:47:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:50.596 12:47:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2632388 /var/tmp/bdevperf.sock 00:23:50.596 12:47:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2632388 ']' 00:23:50.596 12:47:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.596 12:47:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:50.596 12:47:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:50.596 12:47:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.596 12:47:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:50.853 12:47:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.853 12:47:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:50.853 12:47:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:51.109 12:47:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:51.366 Nvme0n1 00:23:51.366 12:47:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:51.623 Nvme0n1 00:23:51.623 12:47:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:51.623 12:47:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:54.145 12:47:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:54.145 12:47:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:54.145 12:47:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:54.145 12:47:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:55.077 12:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:55.077 12:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:55.077 12:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.077 12:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:55.333 12:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.333 12:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:55.333 12:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.333 12:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:55.591 12:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:55.591 12:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:55.591 12:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:55.591 12:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.847 12:47:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.847 12:47:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:55.847 12:47:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.847 12:47:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:55.847 12:47:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.848 12:47:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:55.848 12:47:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.848 12:47:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:56.105 12:47:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.105 12:47:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:56.105 12:47:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.105 12:47:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:56.362 12:47:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.362 12:47:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:56.362 12:47:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:56.620 12:47:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:56.878 12:47:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:57.810 12:47:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:57.810 12:47:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:57.810 12:47:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.810 12:47:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:58.069 12:47:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:58.069 12:47:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:58.069 12:47:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.069 12:47:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:58.327 12:47:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.327 12:47:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:58.327 12:47:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.327 12:47:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:58.327 12:47:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.327 12:47:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:58.327 12:47:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.327 12:47:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:58.585 12:47:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.585 12:47:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:58.585 12:47:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.585 12:47:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:58.843 12:47:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.843 12:47:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:58.843 12:47:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.843 12:47:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:59.101 12:47:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.101 12:47:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:59.101 12:47:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:59.358 12:47:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:59.358 12:47:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:00.731 12:47:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:00.731 12:47:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:00.731 12:47:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.731 12:47:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:00.731 12:47:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.731 12:47:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:00.731 12:47:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.731 12:47:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:00.989 12:47:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:00.989 12:47:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:00.989 12:47:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.989 12:47:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:00.989 12:47:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.989 12:47:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:00.989 12:47:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.989 12:47:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:01.247 12:47:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.247 12:47:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:01.247 12:47:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.247 12:47:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:01.504 12:47:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.504 12:47:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:01.505 12:47:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.505 12:47:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:01.762 12:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.762 12:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:01.762 12:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:01.762 12:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:02.020 12:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:03.392 12:47:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:03.392 12:47:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:03.392 12:47:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.392 12:47:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:03.392 12:47:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.392 12:47:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:03.392 12:47:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.392 12:47:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:03.392 12:47:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:03.392 12:47:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:03.392 12:47:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.392 12:47:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:03.650 12:47:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.650 12:47:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:03.650 12:47:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.650 12:47:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:03.908 12:47:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.908 12:47:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:03.908 12:47:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.908 12:47:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:04.165 12:47:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.165 12:47:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:04.165 12:47:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.165 12:47:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:04.421 12:47:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:04.421 12:47:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:04.421 12:47:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:04.422 12:47:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:04.679 12:47:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:05.612 12:47:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:05.612 12:47:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:05.870 12:47:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.870 12:47:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:05.870 12:47:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:05.870 12:47:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:05.870 12:47:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.870 12:47:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:06.127 12:47:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:06.127 12:47:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:06.127 12:47:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.127 12:47:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:06.385 12:47:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.385 12:47:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:06.385 12:47:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.385 12:47:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:06.385 
12:47:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.385 12:47:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:06.385 12:47:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.385 12:47:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:06.646 12:47:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:06.646 12:47:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:06.646 12:47:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.646 12:47:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:06.949 12:47:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:06.949 12:47:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:06.950 12:47:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:07.236 12:47:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:07.236 12:47:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:08.224 12:47:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:08.224 12:47:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:08.224 12:47:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.224 12:47:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:08.483 12:47:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:08.483 12:47:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:08.483 12:47:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.483 12:47:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:08.741 12:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.741 12:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:08.741 12:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.741 12:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:09.000 12:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.000 12:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:09.000 12:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.000 12:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:09.000 12:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.000 12:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:09.000 12:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.257 12:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:09.257 12:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:09.257 12:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:09.257 12:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.257 12:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:09.514 12:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.514 12:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:09.772 12:47:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:09.772 12:47:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:10.030 12:47:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:10.030 12:47:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:11.404 12:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:11.404 12:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:11.404 12:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:11.404 12:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:11.404 12:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.404 12:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:11.404 12:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.404 12:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:11.662 12:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.662 12:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:11.662 12:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.662 12:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:11.662 12:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.662 12:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:11.662 12:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:11.662 12:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:11.920 12:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.920 12:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:11.920 12:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.920 12:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:12.178 12:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.178 12:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:12.178 12:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.178 12:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:12.436 12:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.436 12:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:12.436 12:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:12.706 12:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:12.706 12:47:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:14.084 12:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:14.084 12:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:14.084 12:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.084 12:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:14.084 12:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:14.084 12:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:14.084 12:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.084 12:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:14.084 12:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.084 12:47:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:14.342 12:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.342 12:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:14.342 12:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.342 12:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:14.342 12:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.342 12:47:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:14.601 12:47:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.601 12:47:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:14.601 12:47:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.601 12:47:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:14.859 12:47:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.859 
12:47:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:14.859 12:47:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:14.859 12:47:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.117 12:47:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.117 12:47:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:15.117 12:47:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:15.117 12:47:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:15.375 12:47:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:16.309 12:47:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:16.309 12:47:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:16.309 12:47:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.309 12:47:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:16.567 12:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.567 12:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:16.567 12:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.567 12:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:16.825 12:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.825 12:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:16.825 12:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.825 12:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:17.083 12:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.083 12:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:17.083 12:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.083 12:47:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:17.341 12:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.341 12:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:17.341 12:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:17.341 12:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.341 12:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.341 12:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:17.341 12:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:17.341 12:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.599 12:48:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.599 12:48:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:17.599 12:48:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:17.857 12:48:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:18.115 12:48:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:19.073 12:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:19.073 12:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:19.073 12:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.073 12:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:19.331 12:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.331 12:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:19.331 12:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.331 12:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:19.589 12:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:19.589 12:48:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:19.589 12:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.589 12:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:19.589 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.589 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:19.589 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.589 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:19.847 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.847 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:19.847 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.847 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:20.106 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.106 
12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:20.106 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.106 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:20.364 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:20.364 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2632388 00:24:20.364 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2632388 ']' 00:24:20.364 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2632388 00:24:20.364 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:20.364 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:20.364 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2632388 00:24:20.364 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:20.364 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:20.364 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2632388' 00:24:20.364 killing process with pid 2632388 00:24:20.364 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2632388 00:24:20.364 
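The `port_status` checks above repeatedly run `bdev_nvme_get_io_paths` over the bdevperf RPC socket and filter the result with jq (`.poll_groups[].io_paths[] | select (.transport.trsvcid=="PORT").FIELD`). The same selection logic can be sketched in Python; the sample JSON below is a hypothetical shape modeled only on the fields the jq filter touches, not verbatim RPC output (the real `bdev_nvme_get_io_paths` reply carries more fields):

```python
import json

# Hypothetical sample modeled on the fields the log's jq filter reads
# (.poll_groups[].io_paths[].transport.trsvcid / .current / .connected /
# .accessible). The real bdev_nvme_get_io_paths output has more fields.
sample = json.loads("""
{
  "poll_groups": [
    {
      "io_paths": [
        {"transport": {"trsvcid": "4420"}, "current": true,
         "connected": true, "accessible": true},
        {"transport": {"trsvcid": "4421"}, "current": false,
         "connected": true, "accessible": false}
      ]
    }
  ]
}
""")

def port_status(data, port, field):
    # Python equivalent of the jq filter in the log:
    # .poll_groups[].io_paths[] | select (.transport.trsvcid=="PORT").FIELD
    for group in data["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == port:
                return path[field]
    return None

# Mirrors "check_status true false true true true false" after
# set_ANA_state non_optimized inaccessible (4420 stays current and
# accessible, 4421 stops being current and accessible but stays connected):
assert port_status(sample, "4420", "current") is True
assert port_status(sample, "4421", "current") is False
assert port_status(sample, "4420", "accessible") is True
assert port_status(sample, "4421", "accessible") is False
assert port_status(sample, "4421", "connected") is True
```

This matches the test's expectation that setting a listener's ANA state to `inaccessible` flips that path's `current` and `accessible` flags while the TCP connection itself stays up.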
12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2632388 00:24:20.364 { 00:24:20.364 "results": [ 00:24:20.364 { 00:24:20.364 "job": "Nvme0n1", 00:24:20.364 "core_mask": "0x4", 00:24:20.364 "workload": "verify", 00:24:20.364 "status": "terminated", 00:24:20.364 "verify_range": { 00:24:20.364 "start": 0, 00:24:20.364 "length": 16384 00:24:20.364 }, 00:24:20.364 "queue_depth": 128, 00:24:20.364 "io_size": 4096, 00:24:20.364 "runtime": 28.58365, 00:24:20.364 "iops": 10090.418823348313, 00:24:20.364 "mibps": 39.41569852870435, 00:24:20.364 "io_failed": 0, 00:24:20.364 "io_timeout": 0, 00:24:20.364 "avg_latency_us": 12649.267239764094, 00:24:20.364 "min_latency_us": 420.28521739130434, 00:24:20.364 "max_latency_us": 3078254.4139130437 00:24:20.364 } 00:24:20.364 ], 00:24:20.364 "core_count": 1 00:24:20.364 } 00:24:20.648 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2632388 00:24:20.648 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:20.648 [2024-11-28 12:47:33.032276] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:24:20.648 [2024-11-28 12:47:33.032326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2632388 ] 00:24:20.648 [2024-11-28 12:47:33.089129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.648 [2024-11-28 12:47:33.130011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.648 Running I/O for 90 seconds... 
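The terminated bdevperf job summary above reports both `iops` and `mibps` for 4096-byte I/Os; the two figures are internally consistent, since MiB/s is just IOPS × io_size / 2^20 (exactly IOPS/256 for 4 KiB blocks). A quick arithmetic check on the numbers from the log:

```python
# Cross-check the bdevperf summary printed in the log:
# MiB/s should equal IOPS * io_size / 2**20.
iops = 10090.418823348313   # "iops" field from the results block
io_size = 4096              # "io_size" field, bytes per I/O
mibps = iops * io_size / 2**20

# Matches the "mibps" field (39.41569852870435) to printed precision.
assert abs(mibps - 39.41569852870435) < 1e-9
```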
00:24:20.648 11099.00 IOPS, 43.36 MiB/s [2024-11-28T11:48:03.167Z] 11131.50 IOPS, 43.48 MiB/s [2024-11-28T11:48:03.167Z] 11118.00 IOPS, 43.43 MiB/s [2024-11-28T11:48:03.167Z] 11105.00 IOPS, 43.38 MiB/s [2024-11-28T11:48:03.167Z] 11057.80 IOPS, 43.19 MiB/s [2024-11-28T11:48:03.167Z] 11038.33 IOPS, 43.12 MiB/s [2024-11-28T11:48:03.167Z] 11025.29 IOPS, 43.07 MiB/s [2024-11-28T11:48:03.167Z] 11012.38 IOPS, 43.02 MiB/s [2024-11-28T11:48:03.167Z] 11016.33 IOPS, 43.03 MiB/s [2024-11-28T11:48:03.167Z] 10993.30 IOPS, 42.94 MiB/s [2024-11-28T11:48:03.167Z] 10990.09 IOPS, 42.93 MiB/s [2024-11-28T11:48:03.167Z] 10979.92 IOPS, 42.89 MiB/s [2024-11-28T11:48:03.167Z] [2024-11-28 12:47:46.898361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.648 [2024-11-28 12:47:46.898399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:20.648 [2024-11-28 12:47:46.898420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.648 [2024-11-28 12:47:46.898428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:20.648 [2024-11-28 12:47:46.898442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.648 [2024-11-28 12:47:46.898449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:20.648 [2024-11-28 12:47:46.898461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.648 [2024-11-28 12:47:46.898469] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:81 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.898982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.898989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.899001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.899008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.899021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64904 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.899028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.899040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.899047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.899060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.899067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.899081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.899089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.899101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.899108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.899120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.899127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 
dnr:0 00:24:20.649 [2024-11-28 12:47:46.899139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.899146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.899158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.899165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.899177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.899183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.899196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.899203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.899215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 [2024-11-28 12:47:46.899222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:20.649 [2024-11-28 12:47:46.899234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.649 
[2024-11-28 12:47:46.899241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:20.650 [2024-11-28 12:47:46.899253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.650 [2024-11-28 12:47:46.899260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.650 [2024-11-28 12:47:46.899272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.650 [2024-11-28 12:47:46.899279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.650 [2024-11-28 12:47:46.899292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.650 [2024-11-28 12:47:46.899299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.650 [2024-11-28 12:47:46.899313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.650 [2024-11-28 12:47:46.899319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:20.650 [2024-11-28 12:47:46.899332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.650 [2024-11-28 12:47:46.899339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:20.650 [2024-11-28 
12:47:46.899351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.650 [2024-11-28 12:47:46.899358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:20.650 [2024-11-28 12:47:46.899371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.650 [2024-11-28 12:47:46.899378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:20.650 [2024-11-28 12:47:46.899390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.650 [2024-11-28 12:47:46.899396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:20.650 [2024-11-28 12:47:46.899409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.650 [2024-11-28 12:47:46.899416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:20.650 [2024-11-28 12:47:46.899428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.650 [2024-11-28 12:47:46.899435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:20.650 [2024-11-28 12:47:46.899447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.650 [2024-11-28 12:47:46.899454] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:20.650 [2024-11-28 12:47:46.899466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.650 [2024-11-28 12:47:46.899473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000b p:0 m:0 dnr:0
[... repeated NOTICE pairs elided: nvme_io_qpair_print_command / spdk_nvme_print_completion for each outstanding I/O on qid:1 — WRITE commands (lba 64656-65520, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (lba 64504-64648, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), every one completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 000a-007d, timestamps 2024-11-28 12:47:46.899-12:47:46.902 ...]
00:24:20.653 [2024-11-28 12:47:46.902793] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.902800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:20.653 [2024-11-28 12:47:46.902812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.902819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:20.653 [2024-11-28 12:47:46.902832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.902838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.653 [2024-11-28 12:47:46.902851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.902857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.653 [2024-11-28 12:47:46.902870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.912545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.653 [2024-11-28 12:47:46.912562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.912570] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:20.653 [2024-11-28 12:47:46.913024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.913039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:20.653 [2024-11-28 12:47:46.913055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.913062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:20.653 [2024-11-28 12:47:46.913076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.913083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:20.653 [2024-11-28 12:47:46.913096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.913104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:20.653 [2024-11-28 12:47:46.913116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.913123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:20.653 [2024-11-28 12:47:46.913135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.913142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:20.653 [2024-11-28 12:47:46.913158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.913165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:20.653 [2024-11-28 12:47:46.913177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.913184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:20.653 [2024-11-28 12:47:46.913196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.913203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:20.653 [2024-11-28 12:47:46.913216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.913223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:20.653 [2024-11-28 12:47:46.913235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.913242] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:20.653 [2024-11-28 12:47:46.913254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.913261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:20.653 [2024-11-28 12:47:46.913273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.913280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:20.653 [2024-11-28 12:47:46.913292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.913299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:20.653 [2024-11-28 12:47:46.913312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.913319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:20.653 [2024-11-28 12:47:46.913331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.913338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:20.653 [2024-11-28 12:47:46.913350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.913357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:20.653 [2024-11-28 12:47:46.913370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.653 [2024-11-28 12:47:46.913376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.654 [2024-11-28 12:47:46.913495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.654 [2024-11-28 12:47:46.913514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.654 [2024-11-28 12:47:46.913533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913778] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913882] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.913983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.913995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.914002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.914015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.914022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.914034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.914041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.914054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.914061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.914073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.914080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.914092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.914099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.914113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.914120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.914132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.654 [2024-11-28 12:47:46.914139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:20.654 [2024-11-28 12:47:46.914151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.655 [2024-11-28 12:47:46.914158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.914170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.655 [2024-11-28 12:47:46.914177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.914189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.655 [2024-11-28 12:47:46.914196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.914208] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.655 [2024-11-28 12:47:46.914215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.914227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.655 [2024-11-28 12:47:46.914234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.914246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.655 [2024-11-28 12:47:46.914253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.914266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.655 [2024-11-28 12:47:46.914273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.914285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.655 [2024-11-28 12:47:46.914292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.914304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.655 [2024-11-28 12:47:46.914311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.914324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.655 [2024-11-28 12:47:46.914331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.914345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.655 [2024-11-28 12:47:46.914352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.914364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.655 [2024-11-28 12:47:46.914371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.914383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.655 [2024-11-28 12:47:46.914390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.914403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.655 [2024-11-28 12:47:46.914409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.914421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.655 [2024-11-28 12:47:46.914429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.914442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.655 [2024-11-28 12:47:46.914448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.914461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.655 [2024-11-28 12:47:46.914467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.914480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.655 [2024-11-28 12:47:46.914487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.914499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.655 [2024-11-28 12:47:46.914506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.914518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.655 [2024-11-28 12:47:46.914525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.914537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.655 [2024-11-28 12:47:46.914544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.914556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.655 [2024-11-28 12:47:46.914563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.915231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.655 [2024-11-28 12:47:46.915248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.915263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.655 [2024-11-28 12:47:46.915270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.915283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.655 [2024-11-28 12:47:46.915289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.915303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.655 [2024-11-28 12:47:46.915310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.915323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.655 [2024-11-28 12:47:46.915329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.915342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.655 [2024-11-28 12:47:46.915349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.915361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.655 [2024-11-28 12:47:46.915368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.915380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.655 [2024-11-28 12:47:46.915387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.915400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.655 [2024-11-28 12:47:46.915407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.915419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.655 [2024-11-28 12:47:46.915426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.915438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.655 [2024-11-28 12:47:46.915445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.915457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.655 [2024-11-28 12:47:46.915464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.915476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.655 [2024-11-28 12:47:46.915485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.915497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.655 [2024-11-28 12:47:46.915504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.915516] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.655 [2024-11-28 12:47:46.915523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.915535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.655 [2024-11-28 12:47:46.915542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.915555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.655 [2024-11-28 12:47:46.915561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:20.655 [2024-11-28 12:47:46.915574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.915581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.915593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.915600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.915612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.915619] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.915631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.915638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.915650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.915657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.915669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.915676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.915689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.915695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.915708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.915714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.915728] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.915735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.915747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.915755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.915767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.915774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.915786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.915793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.915805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.915812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.915824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.915831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.915843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.915850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.915862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.915869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.915881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.915889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.915901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.915908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.915920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.915927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.915939] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.915951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.915968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.915975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.915987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.915994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.916007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.916013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.916026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.916033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.916045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.916051] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.916064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.916070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.916083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.916089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.916102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.916108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.916121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.916128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.916140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.916147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.916159] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.916166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.916577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.916587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.916600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.916610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.916622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.916629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.916642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.916648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.916661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.916667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.916680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.916687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.916699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.916706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.916718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.656 [2024-11-28 12:47:46.916725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:20.656 [2024-11-28 12:47:46.916738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.916744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.916757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.916763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.916776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.916783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.916795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.916802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.916814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.916821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.916833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.916842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.916854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.916861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.916873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.916880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.916892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.916901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.916914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.916921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.916934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.916941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.916959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.916967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.916982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.916990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.917002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.917010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.917022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.917030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.917042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.917050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.917062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.657 [2024-11-28 12:47:46.917069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.917082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.657 [2024-11-28 12:47:46.917089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.917103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.657 [2024-11-28 12:47:46.917110] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.917123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.917130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.917150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.917157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.917169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.917176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.917188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.917195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.917207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.917214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.917226] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.917233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.917246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.917253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.917265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.917272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.917285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.917292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.917305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.917311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.917324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.917331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.917344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.917352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.917364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.917371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.917383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.917390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.917402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.657 [2024-11-28 12:47:46.917409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:20.657 [2024-11-28 12:47:46.917421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.917428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.917440] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.917447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.917460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.917467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.917480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.917486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.917499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.917505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.917518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.917525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.917537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.917543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.917556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.917564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.918029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.918044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.918060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.918067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.918080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.918087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.918099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.918106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.918118] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.918125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.918137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.918144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.918156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.918163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.918176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.918183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.918195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.918202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.918214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.918221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.918233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.923617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.923633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.658 [2024-11-28 12:47:46.923641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.923654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.658 [2024-11-28 12:47:46.923663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.923676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.658 [2024-11-28 12:47:46.923683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.923695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.658 [2024-11-28 12:47:46.923702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.923714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.658 [2024-11-28 12:47:46.923721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.923734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.658 [2024-11-28 12:47:46.923740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.923753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.658 [2024-11-28 12:47:46.923761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.923773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.658 [2024-11-28 12:47:46.923780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.923792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.658 [2024-11-28 12:47:46.923799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.923811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.658 [2024-11-28 12:47:46.923818] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.923830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.658 [2024-11-28 12:47:46.923837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.923849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.658 [2024-11-28 12:47:46.923856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.923868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.658 [2024-11-28 12:47:46.923875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.923887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.658 [2024-11-28 12:47:46.923894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.923909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.658 [2024-11-28 12:47:46.923916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.923928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.923935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.923949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.923957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.923969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.923975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.923988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.923994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.924007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.658 [2024-11-28 12:47:46.924013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:20.658 [2024-11-28 12:47:46.924026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.658 [2024-11-28 12:47:46.924033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924754] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.924986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.924992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.925005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.925012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.925024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.925031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.925043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.925050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.925063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.925070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:20.659 [2024-11-28 12:47:46.925082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.659 [2024-11-28 12:47:46.925089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:20.660 [2024-11-28 12:47:46.925101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.660 [2024-11-28 12:47:46.925108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:20.660 [2024-11-28 12:47:46.925120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.660 [2024-11-28 12:47:46.925127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:20.660 [2024-11-28 12:47:46.925139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.660 [2024-11-28 12:47:46.925146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:20.660 [2024-11-28 12:47:46.925160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.660 [2024-11-28 12:47:46.925167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:20.660 [2024-11-28 12:47:46.925180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.660 [2024-11-28 12:47:46.925186] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.660 [... ~125 repeated nvme_qpair.c NOTICE pairs elided (2024-11-28 12:47:46.925–.928): WRITE and READ commands on qid:1 nsid:1 (lba 64504–65520, len:8), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing 0000–0073 ...] [2024-11-28 12:47:46.928646] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.928653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.928665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.928672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.928685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.928692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.928704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.928711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.928723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.928730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.928742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.928751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.928763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.928770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.928783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.928789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929158] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929371] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929473] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:20.663 [2024-11-28 12:47:46.929544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.663 [2024-11-28 12:47:46.929551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.929563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.929569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.929582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.929588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.929601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.929608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.929620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.929629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.929641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.929648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.929660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.929667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.929679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.929686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.929698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.664 [2024-11-28 12:47:46.929705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.929718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.664 [2024-11-28 12:47:46.929724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.929737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.664 [2024-11-28 12:47:46.929743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.929757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.929765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930205] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930419] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:20.664 [2024-11-28 12:47:46.930608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.664 [2024-11-28 12:47:46.930615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.930627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.665 [2024-11-28 12:47:46.930634] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.930647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.665 [2024-11-28 12:47:46.930654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.930666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.665 [2024-11-28 12:47:46.930673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.930686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.665 [2024-11-28 12:47:46.930692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.930988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.665 [2024-11-28 12:47:46.930999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.931013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.665 [2024-11-28 12:47:46.931020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.931033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.665 [2024-11-28 12:47:46.931040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.931052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.665 [2024-11-28 12:47:46.931059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.931074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.665 [2024-11-28 12:47:46.931081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.931093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.665 [2024-11-28 12:47:46.931100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.931113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.665 [2024-11-28 12:47:46.931119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.931133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.665 [2024-11-28 12:47:46.931140] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.931152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.665 [2024-11-28 12:47:46.931159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.931171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.665 [2024-11-28 12:47:46.931178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.931190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.665 [2024-11-28 12:47:46.931197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.931209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.665 [2024-11-28 12:47:46.931216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.931229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.665 [2024-11-28 12:47:46.931236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.931248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.665 [2024-11-28 12:47:46.931255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.931267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.665 [2024-11-28 12:47:46.931274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.931287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.665 [2024-11-28 12:47:46.931294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.931308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.665 [2024-11-28 12:47:46.931317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.931331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.665 [2024-11-28 12:47:46.931338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.931350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.665 [2024-11-28 12:47:46.931357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.931369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.665 [2024-11-28 12:47:46.931377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.931389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.665 [2024-11-28 12:47:46.931396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.931408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.665 [2024-11-28 12:47:46.931415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.931427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.665 [2024-11-28 12:47:46.931434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.931447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.665 [2024-11-28 12:47:46.931454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.932381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.665 [2024-11-28 12:47:46.932390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.932404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.665 [2024-11-28 12:47:46.932412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.932424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.665 [2024-11-28 12:47:46.932431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.932443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.665 [2024-11-28 12:47:46.932450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.932462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.665 [2024-11-28 12:47:46.932472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.932485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.665 [2024-11-28 12:47:46.932491] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.932504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.665 [2024-11-28 12:47:46.932511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.932523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.665 [2024-11-28 12:47:46.932530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.932542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.665 [2024-11-28 12:47:46.932549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.932561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.665 [2024-11-28 12:47:46.932569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.932581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.665 [2024-11-28 12:47:46.932588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.665 [2024-11-28 12:47:46.932600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.932607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.932619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.932626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.932638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.932645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.932658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.932665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.932677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.932684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.932696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.932708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.932869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.932878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.932892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.932898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.932911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.932918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.932930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.932937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.932953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.932961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.932973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.932980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.932992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.932999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.933011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.933018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.933030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.933037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.933049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.933056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.933068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.933075] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.933087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.933094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.933109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.933116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.933128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.933135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.933148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.933154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.933167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.933174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.933186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.933193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.933205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.933212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.933224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.933232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.933244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.933251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.933263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.933270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.933282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.933289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.933301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.933308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.933321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.933328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.933341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.933348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.933361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.933367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.933380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.933387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.933399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.933406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:20.666 [2024-11-28 12:47:46.933418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.666 [2024-11-28 12:47:46.933425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933503] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.667 [2024-11-28 12:47:46.933872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.667 [2024-11-28 12:47:46.933891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.667 [2024-11-28 12:47:46.933910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.933984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.933991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.934503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.934516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.934530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.934537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.934550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.934557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.934573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.934579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.934592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.934599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.934611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.934618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.934630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.934637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.934649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.934656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.934668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.934675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.934688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.667 [2024-11-28 12:47:46.934694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:20.667 [2024-11-28 12:47:46.934707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.934714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.934726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.934733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.934745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.934752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.934764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.934771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.934783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.934790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.934803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.934811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.934823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.934830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.934842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.934849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.934861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.934868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.934880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.934887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.934900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.934906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.934919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.934926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.934938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.934945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.934963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.934971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.934983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.934990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.935002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.935009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.935021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.935028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.935040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.935049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.935061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.935068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.935080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.935087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.935099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.668 [2024-11-28 12:47:46.935106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.935118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.668 [2024-11-28 12:47:46.935125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.935138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.668 [2024-11-28 12:47:46.935144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.935157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.668 [2024-11-28 12:47:46.935164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.935176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.668 [2024-11-28 12:47:46.935182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.935195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.668 [2024-11-28 12:47:46.935202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.935216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.668 [2024-11-28 12:47:46.935223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.935235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.668 [2024-11-28 12:47:46.935242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.938807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.668 [2024-11-28 12:47:46.938817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.938830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.668 [2024-11-28 12:47:46.938839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.938852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.668 [2024-11-28 12:47:46.938859] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.938871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.668 [2024-11-28 12:47:46.938878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.938891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.668 [2024-11-28 12:47:46.938898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.938910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.668 [2024-11-28 12:47:46.938917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.938929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.668 [2024-11-28 12:47:46.938936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.938958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.938966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.938979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.938986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.939393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.939404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.939419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.668 [2024-11-28 12:47:46.939426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:20.668 [2024-11-28 12:47:46.939439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.669 [2024-11-28 12:47:46.939466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939598] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939700] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.939979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.939988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.940000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.940007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.940019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.940026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.940038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.940045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.940058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.940065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.940077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.940084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.940096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.940103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.940115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.940122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.940135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.669 [2024-11-28 12:47:46.940142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:20.669 [2024-11-28 12:47:46.940155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion notice pairs elided: WRITE (SGL DATA BLOCK OFFSET) and READ (SGL TRANSPORT DATA BLOCK TRANSPORT) commands on qid:1, lba 64504-65520, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 12:47:46.940155-12:47:46.943480 ...]
00:24:20.672 [2024-11-28 12:47:46.943480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:20.672 [2024-11-28 12:47:46.943492] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.672 [2024-11-28 12:47:46.943499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:20.672 [2024-11-28 12:47:46.943511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.672 [2024-11-28 12:47:46.943518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.943530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.943537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.943549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.943556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.943569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.943576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.943588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.943595] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.943615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.943623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.943635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.943642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.943655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.943662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.943676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.943683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.943695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.943702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.943716] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.943722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.943735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.943742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.943754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.943761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.943773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.943780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.943792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.943799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.943811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.943818] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.943830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.943837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.943849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.943856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.943868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.943875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.943888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.943894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.943907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.943914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.944355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.944367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.944382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.944392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.944405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.944412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.944424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.944432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.944444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.944451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.944463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.944470] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.944483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.944490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.944502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.944509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.944522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.944528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.944541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.944548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.944560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.944567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.944579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.944586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.944598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.944605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.944618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.944626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.944639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.944646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.944658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.944665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.944677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.944684] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.944697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.944704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.944716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.673 [2024-11-28 12:47:46.944723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:20.673 [2024-11-28 12:47:46.944735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.944742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.944754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.944761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.944773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.944780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.944793] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.944800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.944812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.944819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.944831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.944838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.944850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.944857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.944871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.944878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.944890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.674 [2024-11-28 12:47:46.944897] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.944909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.674 [2024-11-28 12:47:46.944916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.944929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.674 [2024-11-28 12:47:46.944935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.944953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.944960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.944973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.944979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.944992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.944999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945012] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:20.674 [2024-11-28 12:47:46.945866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.674 [2024-11-28 12:47:46.945873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.945886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.675 [2024-11-28 12:47:46.945893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.945905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.675 [2024-11-28 12:47:46.945912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.945924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.675 [2024-11-28 12:47:46.945931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.945944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.675 [2024-11-28 12:47:46.945957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.945970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.675 [2024-11-28 12:47:46.945977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.945989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.675 [2024-11-28 12:47:46.945996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.675 [2024-11-28 12:47:46.946015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.675 [2024-11-28 12:47:46.946034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.675 [2024-11-28 12:47:46.946053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.675 [2024-11-28 12:47:46.946072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.675 [2024-11-28 12:47:46.946092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.675 [2024-11-28 12:47:46.946111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.675 [2024-11-28 12:47:46.946130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.675 [2024-11-28 12:47:46.946150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.675 [2024-11-28 12:47:46.946168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.675 [2024-11-28 12:47:46.946188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.675 [2024-11-28 12:47:46.946209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.675 [2024-11-28 12:47:46.946490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.675 [2024-11-28 12:47:46.946510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.675 [2024-11-28 12:47:46.946530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.675 [2024-11-28 12:47:46.946550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.675 [2024-11-28 12:47:46.946569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.675 [2024-11-28 12:47:46.946588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.675 [2024-11-28 12:47:46.946608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.675 [2024-11-28 12:47:46.946627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.675 [2024-11-28 12:47:46.946646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.675 [2024-11-28 12:47:46.946665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.675 [2024-11-28 12:47:46.946686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.675 [2024-11-28 12:47:46.946711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.675 [2024-11-28 12:47:46.946730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.675 [2024-11-28 12:47:46.946750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.675 [2024-11-28 12:47:46.946769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.675 [2024-11-28 12:47:46.946788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.675 [2024-11-28 12:47:46.946808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:20.675 [2024-11-28 12:47:46.946820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.946827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.946839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.946846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.946858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.946865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.946877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.946884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.946897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.946904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.946916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.946923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.946935] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.946944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.946962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.946969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.946981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.946988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947521] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947630] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.947809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.947816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.948039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.948049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:20.676 [2024-11-28 12:47:46.948063] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.676 [2024-11-28 12:47:46.948070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948390] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.677 [2024-11-28 12:47:46.948529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.677 [2024-11-28 12:47:46.948548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.677 [2024-11-28 12:47:46.948568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948852] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.948980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.948993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.949000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.949013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.949020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.949032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.949039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.949052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.949058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.949071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.949078] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:20.677 [2024-11-28 12:47:46.949090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.677 [2024-11-28 12:47:46.949097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.949116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.949137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.949156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.949177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949190] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.949197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.949219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.949238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.949258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.949277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.949297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.949532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.949553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.949572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.949594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.949613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949626] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.949632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.949652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.949671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.949767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.949787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.678 [2024-11-28 12:47:46.949807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.678 [2024-11-28 12:47:46.949826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.678 [2024-11-28 12:47:46.949846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.678 [2024-11-28 12:47:46.949867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.678 [2024-11-28 12:47:46.949887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.678 [2024-11-28 12:47:46.949910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.678 [2024-11-28 12:47:46.949929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.678 [2024-11-28 12:47:46.949954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.678 [2024-11-28 12:47:46.949973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.949986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.678 [2024-11-28 12:47:46.949993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.950005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.678 [2024-11-28 12:47:46.950012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.950024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.678 [2024-11-28 12:47:46.950031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.950043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.678 [2024-11-28 12:47:46.950050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.950063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.678 [2024-11-28 12:47:46.950070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.950082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.678 [2024-11-28 12:47:46.950089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.950101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.950108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.950121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.950127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.950140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.950148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:20.678 [2024-11-28 12:47:46.950161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.678 [2024-11-28 12:47:46.950168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.950181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.950188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.950201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.679 [2024-11-28 12:47:46.950208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.950221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.950228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951341] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951452] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951559] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951820] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951928] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:20.679 [2024-11-28 12:47:46.951940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.679 [2024-11-28 12:47:46.951951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006d p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided: WRITE and READ commands on sqid:1 (lba 64504-65520, len:8), every one completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 2024-11-28 12:47:46.951940 through 12:47:46.955109 ...]
00:24:20.682 [2024-11-28 12:47:46.955102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.682 [2024-11-28 12:47:46.955109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:20.682 [2024-11-28 12:47:46.955121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.682 [2024-11-28 12:47:46.955128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:20.682 [2024-11-28 12:47:46.955140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.682 [2024-11-28 12:47:46.955147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.682 [2024-11-28 12:47:46.955159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.682 [2024-11-28 12:47:46.955166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.682 [2024-11-28 12:47:46.955179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.682 [2024-11-28 12:47:46.955186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:20.682 [2024-11-28 12:47:46.955198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.682 [2024-11-28 12:47:46.955207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:20.682 [2024-11-28 12:47:46.955219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.682 [2024-11-28 12:47:46.955226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:20.682 [2024-11-28 12:47:46.955239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.682 [2024-11-28 12:47:46.955246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:20.682 [2024-11-28 12:47:46.955258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.682 [2024-11-28 12:47:46.955265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:20.682 [2024-11-28 12:47:46.955277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.682 [2024-11-28 12:47:46.955284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:20.682 [2024-11-28 12:47:46.955297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.682 [2024-11-28 12:47:46.955304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:20.682 [2024-11-28 12:47:46.955316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.955323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.955335] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.955342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.955354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.955361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.955374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.955380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.955393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.955400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.955413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.955419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.955431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.955438] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.955452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.955459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.955472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.955478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.955491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.955498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.955511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.955518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.955530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.955537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.955549] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.955556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.955568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.955575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.955587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.955594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.955957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.955967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.955981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.955988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.956001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.956008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.956021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.956028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.956043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.956050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.956062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.956069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.956082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.956088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.956101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.956108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.956120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.956127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.956139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.956146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.956159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.956165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.956178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.956185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.956197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.956204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.956216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.956223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.956235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.956242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.956254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.956261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.956274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.956282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.956294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.956301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.956314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.956320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.956333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.956340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.956352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.956359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.956372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.956379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.956391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.956398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.956410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.683 [2024-11-28 12:47:46.956417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:20.683 [2024-11-28 12:47:46.956429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.956436] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.956448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.956455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.956468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.956474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.956487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.956494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.956506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.956515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.956527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.956534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.956546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.956553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.956566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.956573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.956859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.956868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.956882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.956889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.956902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.684 [2024-11-28 12:47:46.956910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.956922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.684 [2024-11-28 12:47:46.956929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.956941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.684 [2024-11-28 12:47:46.956954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.956967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.956974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.956987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.956993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.957006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.957013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.957026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.957033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.957047] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.957054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.957066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.957073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.957086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.957092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.957105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.957112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.957126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.957132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.957145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.957152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.957164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.957171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.957183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.957190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.957202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.957209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.957222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.957229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.957241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.957248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.957261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.957267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.957281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.957288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.957301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.957308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.957320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.957327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.957340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.957346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.957359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.957366] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:20.684 [2024-11-28 12:47:46.957378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.684 [2024-11-28 12:47:46.957385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.957398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.685 [2024-11-28 12:47:46.957404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.957417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.685 [2024-11-28 12:47:46.957423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.957436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.685 [2024-11-28 12:47:46.957443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.957456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.685 [2024-11-28 12:47:46.957462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.957475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.685 [2024-11-28 12:47:46.957481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.957494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.685 [2024-11-28 12:47:46.957501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.957513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.685 [2024-11-28 12:47:46.957523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.957535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.685 [2024-11-28 12:47:46.957542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.957865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.685 [2024-11-28 12:47:46.957875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.957890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.685 [2024-11-28 12:47:46.957897] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.957910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.685 [2024-11-28 12:47:46.957917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.957929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.685 [2024-11-28 12:47:46.957936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.957953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.685 [2024-11-28 12:47:46.957961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.957973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.685 [2024-11-28 12:47:46.957980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.957993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.685 [2024-11-28 12:47:46.958000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.685 [2024-11-28 12:47:46.958020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.685 [2024-11-28 12:47:46.958039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.685 [2024-11-28 12:47:46.958058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.685 [2024-11-28 12:47:46.958082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.685 [2024-11-28 12:47:46.958102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.685 [2024-11-28 12:47:46.958121] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.685 [2024-11-28 12:47:46.958140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.685 [2024-11-28 12:47:46.958160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.685 [2024-11-28 12:47:46.958179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.685 [2024-11-28 12:47:46.958199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.685 [2024-11-28 12:47:46.958218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.685 [2024-11-28 12:47:46.958237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.685 [2024-11-28 12:47:46.958256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.685 [2024-11-28 12:47:46.958276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.685 [2024-11-28 12:47:46.958295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.685 [2024-11-28 12:47:46.958314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.685 [2024-11-28 12:47:46.958335] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.685 [2024-11-28 12:47:46.958354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.685 [2024-11-28 12:47:46.958373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.685 [2024-11-28 12:47:46.958393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.685 [2024-11-28 12:47:46.958412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.685 [2024-11-28 12:47:46.958431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.685 [2024-11-28 12:47:46.958450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:20.685 [2024-11-28 12:47:46.958463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.685 [2024-11-28 12:47:46.958470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.958482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.958489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.958752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.958761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.958774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.958781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.958794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.958801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.958815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.958822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.958834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.958841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.958854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.958861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.958873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.958880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.958892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.958899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.958911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.958918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.958930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.958937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.958956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.958963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.958976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.958982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.958995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959132] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959347] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.686 [2024-11-28 12:47:46.959629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.686 [2024-11-28 12:47:46.959643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.687 [2024-11-28 12:47:46.959650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.687 [2024-11-28 12:47:46.959665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.687 [2024-11-28 12:47:46.959672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:20.687 [2024-11-28 12:47:46.959688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.687 [2024-11-28 12:47:46.959696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:20.687 [2024-11-28 12:47:46.959710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.687 [2024-11-28 12:47:46.959717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:20.687
[... repeated command/completion *NOTICE* pairs omitted: WRITE and READ commands on sqid:1 (lba 64504-65520, len:8), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 2024-11-28 12:47:46.959732 through 12:47:46.962207 ...]
[2024-11-28 12:47:46.962226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.689 [2024-11-28 12:47:46.962233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:20.689
10702.69 IOPS, 41.81 MiB/s [2024-11-28T11:48:03.208Z] 9938.21 IOPS, 38.82 MiB/s [2024-11-28T11:48:03.208Z] 9275.67 IOPS, 36.23 MiB/s [2024-11-28T11:48:03.208Z] 8864.25 IOPS, 34.63 MiB/s [2024-11-28T11:48:03.208Z] 8979.35 IOPS, 35.08 MiB/s [2024-11-28T11:48:03.208Z] 9075.89 IOPS, 35.45 MiB/s [2024-11-28T11:48:03.208Z] 9278.00 IOPS, 36.24 MiB/s [2024-11-28T11:48:03.208Z] 9466.10 IOPS, 36.98 MiB/s
[2024-11-28T11:48:03.208Z] 9609.71 IOPS, 37.54 MiB/s [2024-11-28T11:48:03.208Z] 9662.95 IOPS, 37.75 MiB/s [2024-11-28T11:48:03.208Z] 9704.91 IOPS, 37.91 MiB/s [2024-11-28T11:48:03.208Z] 9786.08 IOPS, 38.23 MiB/s [2024-11-28T11:48:03.208Z] 9916.48 IOPS, 38.74 MiB/s [2024-11-28T11:48:03.208Z] 10030.23 IOPS, 39.18 MiB/s [2024-11-28T11:48:03.208Z] [2024-11-28 12:48:00.460653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.689 [2024-11-28 12:48:00.460692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:20.689
[... repeated command/completion *NOTICE* pairs omitted: WRITE commands on sqid:1 (lba 9912-10312, len:8), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 2024-11-28 12:48:00.460740 through 12:48:00.462117 ...]
[2024-11-28 12:48:00.462110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.690 [2024-11-28 12:48:00.462117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:20.690 [2024-11-28 12:48:00.462130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.690 [2024-11-28 12:48:00.462136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:20.690 [2024-11-28 12:48:00.462149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.690 [2024-11-28 12:48:00.462156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:20.690 [2024-11-28 12:48:00.462169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.690 [2024-11-28 12:48:00.462175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:20.690 [2024-11-28 12:48:00.462188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.690 [2024-11-28 12:48:00.462198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:20.690 [2024-11-28 12:48:00.462211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.690 [2024-11-28 12:48:00.462218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:20.690 [2024-11-28 12:48:00.462231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.690 [2024-11-28 12:48:00.462238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:20.690 [2024-11-28 12:48:00.462255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.690 [2024-11-28 12:48:00.462262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:20.690 [2024-11-28 12:48:00.462275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.690 [2024-11-28 12:48:00.462282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:20.690 [2024-11-28 12:48:00.462294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.690 [2024-11-28 12:48:00.462301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:20.690 [2024-11-28 12:48:00.462314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.690 [2024-11-28 12:48:00.462321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:20.690 [2024-11-28 12:48:00.462333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.690 [2024-11-28 12:48:00.462340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:20.690 [2024-11-28 12:48:00.462353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.690 [2024-11-28 12:48:00.462360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:20.690 [2024-11-28 12:48:00.462373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.690 [2024-11-28 12:48:00.462380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:20.690 [2024-11-28 12:48:00.463469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.690 [2024-11-28 12:48:00.463485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:20.690 [2024-11-28 12:48:00.463501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.690 [2024-11-28 12:48:00.463509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:20.690 [2024-11-28 12:48:00.463522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.690 [2024-11-28 12:48:00.463529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:20.690 [2024-11-28 12:48:00.463542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.690 [2024-11-28 12:48:00.463549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:20.690 [2024-11-28 12:48:00.463562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.690 [2024-11-28 12:48:00.463569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:20.690 [2024-11-28 12:48:00.463585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.690 [2024-11-28 12:48:00.463592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:20.690 [2024-11-28 12:48:00.463604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.690 [2024-11-28 12:48:00.463613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:20.690 [2024-11-28 12:48:00.463626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.690 [2024-11-28 12:48:00.463633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:20.691 [2024-11-28 12:48:00.463645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.691 [2024-11-28 12:48:00.463652] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:20.691 [2024-11-28 12:48:00.463664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.691 [2024-11-28 12:48:00.463673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:20.691 [2024-11-28 12:48:00.463685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.691 [2024-11-28 12:48:00.463692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:20.691 [2024-11-28 12:48:00.463705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.691 [2024-11-28 12:48:00.463711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:20.691 [2024-11-28 12:48:00.463724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.691 [2024-11-28 12:48:00.463731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:20.691 [2024-11-28 12:48:00.463744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.691 [2024-11-28 12:48:00.463750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:20.691 [2024-11-28 12:48:00.463763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.691 [2024-11-28 12:48:00.463771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.691 [2024-11-28 12:48:00.463784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.691 [2024-11-28 12:48:00.463791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.691 10073.22 IOPS, 39.35 MiB/s [2024-11-28T11:48:03.210Z] 10094.43 IOPS, 39.43 MiB/s [2024-11-28T11:48:03.210Z] Received shutdown signal, test time was about 28.584318 seconds 00:24:20.691 00:24:20.691 Latency(us) 00:24:20.691 [2024-11-28T11:48:03.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.691 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:20.691 Verification LBA range: start 0x0 length 0x4000 00:24:20.691 Nvme0n1 : 28.58 10090.42 39.42 0.00 0.00 12649.27 420.29 3078254.41 00:24:20.691 [2024-11-28T11:48:03.210Z] =================================================================================================================== 00:24:20.691 [2024-11-28T11:48:03.210Z] Total : 10090.42 39.42 0.00 0.00 12649.27 420.29 3078254.41 00:24:20.691 12:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:20.949 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:20.949 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:20.949 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:20.949 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:20.949 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:20.949 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:20.949 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:20.949 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:20.949 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:20.949 rmmod nvme_tcp 00:24:20.949 rmmod nvme_fabrics 00:24:20.949 rmmod nvme_keyring 00:24:20.949 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:20.949 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:20.949 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:20.949 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2632126 ']' 00:24:20.949 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2632126 00:24:20.949 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2632126 ']' 00:24:20.949 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2632126 00:24:20.949 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:20.949 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:20.950 
12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2632126 00:24:20.950 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:20.950 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:20.950 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2632126' 00:24:20.950 killing process with pid 2632126 00:24:20.950 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2632126 00:24:20.950 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2632126 00:24:21.208 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:21.208 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:21.208 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:21.208 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:21.208 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:21.208 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:21.208 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:21.208 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:21.208 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:21.208 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.208 
12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.208 12:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.110 12:48:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:23.110 00:24:23.110 real 0m39.434s 00:24:23.110 user 1m48.246s 00:24:23.110 sys 0m10.940s 00:24:23.110 12:48:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:23.110 12:48:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:23.110 ************************************ 00:24:23.110 END TEST nvmf_host_multipath_status 00:24:23.110 ************************************ 00:24:23.110 12:48:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:23.110 12:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:23.110 12:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:23.110 12:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.370 ************************************ 00:24:23.370 START TEST nvmf_discovery_remove_ifc 00:24:23.370 ************************************ 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:23.370 * Looking for test storage... 
00:24:23.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:24:23.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.370 --rc genhtml_branch_coverage=1 00:24:23.370 --rc genhtml_function_coverage=1 00:24:23.370 --rc genhtml_legend=1 00:24:23.370 --rc geninfo_all_blocks=1 00:24:23.370 --rc geninfo_unexecuted_blocks=1 00:24:23.370 00:24:23.370 ' 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:23.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.370 --rc genhtml_branch_coverage=1 00:24:23.370 --rc genhtml_function_coverage=1 00:24:23.370 --rc genhtml_legend=1 00:24:23.370 --rc geninfo_all_blocks=1 00:24:23.370 --rc geninfo_unexecuted_blocks=1 00:24:23.370 00:24:23.370 ' 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:23.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.370 --rc genhtml_branch_coverage=1 00:24:23.370 --rc genhtml_function_coverage=1 00:24:23.370 --rc genhtml_legend=1 00:24:23.370 --rc geninfo_all_blocks=1 00:24:23.370 --rc geninfo_unexecuted_blocks=1 00:24:23.370 00:24:23.370 ' 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:23.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.370 --rc genhtml_branch_coverage=1 00:24:23.370 --rc genhtml_function_coverage=1 00:24:23.370 --rc genhtml_legend=1 00:24:23.370 --rc geninfo_all_blocks=1 00:24:23.370 --rc geninfo_unexecuted_blocks=1 00:24:23.370 00:24:23.370 ' 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.370 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:23.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:23.371 
12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:23.371 12:48:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:29.925 12:48:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:29.925 12:48:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:29.925 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:29.926 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:29.926 12:48:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:29.926 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:29.926 Found net devices under 0000:86:00.0: cvl_0_0 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:29.926 Found net devices under 0000:86:00.1: cvl_0_1 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:29.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:29.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:24:29.926 00:24:29.926 --- 10.0.0.2 ping statistics --- 00:24:29.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.926 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:29.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:29.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:24:29.926 00:24:29.926 --- 10.0.0.1 ping statistics --- 00:24:29.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.926 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2641447 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2641447 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2641447 ']' 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:29.926 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:29.926 [2024-11-28 12:48:11.578613] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:24:29.926 [2024-11-28 12:48:11.578666] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.927 [2024-11-28 12:48:11.645276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.927 [2024-11-28 12:48:11.687594] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.927 [2024-11-28 12:48:11.687631] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:29.927 [2024-11-28 12:48:11.687639] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.927 [2024-11-28 12:48:11.687645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.927 [2024-11-28 12:48:11.687650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:29.927 [2024-11-28 12:48:11.688234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.927 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.927 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:29.927 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:29.927 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:29.927 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:29.927 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.927 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:29.927 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.927 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:29.927 [2024-11-28 12:48:11.829185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.927 [2024-11-28 12:48:11.837360] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:29.927 null0 00:24:29.927 [2024-11-28 12:48:11.869341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:24:29.927 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.927 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2641658 00:24:29.927 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:29.927 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2641658 /tmp/host.sock 00:24:29.927 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2641658 ']' 00:24:29.927 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:29.927 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:29.927 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:29.927 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:29.927 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:29.927 12:48:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:29.927 [2024-11-28 12:48:11.938071] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:24:29.927 [2024-11-28 12:48:11.938114] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2641658 ] 00:24:29.927 [2024-11-28 12:48:11.999451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.927 [2024-11-28 12:48:12.042048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.927 12:48:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.927 12:48:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:29.927 12:48:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:29.927 12:48:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:29.927 12:48:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.927 12:48:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:29.927 12:48:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.927 12:48:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:29.927 12:48:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.927 12:48:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:29.927 12:48:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.927 12:48:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:29.927 12:48:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.927 12:48:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.857 [2024-11-28 12:48:13.226430] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:30.857 [2024-11-28 12:48:13.226450] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:30.857 [2024-11-28 12:48:13.226465] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:30.857 [2024-11-28 12:48:13.352861] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:31.114 [2024-11-28 12:48:13.448528] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:31.114 [2024-11-28 12:48:13.449308] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x21eda50:1 started. 
00:24:31.114 [2024-11-28 12:48:13.450646] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:31.114 [2024-11-28 12:48:13.450689] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:31.114 [2024-11-28 12:48:13.450708] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:31.114 [2024-11-28 12:48:13.450720] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:31.114 [2024-11-28 12:48:13.450736] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:31.114 12:48:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.114 12:48:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:31.114 12:48:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:31.114 [2024-11-28 12:48:13.454767] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x21eda50 was disconnected and freed. delete nvme_qpair. 
00:24:31.114 12:48:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:31.114 12:48:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:31.114 12:48:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.114 12:48:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:31.114 12:48:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:31.114 12:48:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:31.114 12:48:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.114 12:48:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:31.114 12:48:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:31.114 12:48:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:31.114 12:48:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:31.114 12:48:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:31.114 12:48:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:31.114 12:48:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:31.114 12:48:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.114 12:48:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:31.114 12:48:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:31.115 12:48:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:31.115 12:48:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.371 12:48:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:31.371 12:48:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:32.302 12:48:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:32.302 12:48:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:32.302 12:48:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:32.302 12:48:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:32.302 12:48:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.302 12:48:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.302 12:48:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:32.302 12:48:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.302 12:48:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:32.302 12:48:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:33.232 12:48:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:24:33.232 12:48:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:33.232 12:48:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:33.232 12:48:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.232 12:48:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:33.232 12:48:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:33.232 12:48:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:33.232 12:48:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.488 12:48:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:33.488 12:48:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:34.420 12:48:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:34.420 12:48:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.420 12:48:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:34.420 12:48:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.420 12:48:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:34.420 12:48:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.420 12:48:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:34.420 12:48:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.420 12:48:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:34.420 12:48:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:35.353 12:48:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:35.353 12:48:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.353 12:48:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:35.353 12:48:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.353 12:48:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:35.353 12:48:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:35.353 12:48:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:35.353 12:48:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.353 12:48:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:35.353 12:48:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:36.723 12:48:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:36.723 12:48:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:36.723 12:48:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.723 12:48:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:36.723 12:48:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.723 12:48:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.723 12:48:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:36.723 12:48:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.723 [2024-11-28 12:48:18.892254] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:36.723 [2024-11-28 12:48:18.892294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.723 [2024-11-28 12:48:18.892319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.723 [2024-11-28 12:48:18.892329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.723 [2024-11-28 12:48:18.892340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.723 [2024-11-28 12:48:18.892348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.723 [2024-11-28 12:48:18.892355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.723 [2024-11-28 12:48:18.892362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.723 [2024-11-28 12:48:18.892369] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.723 [2024-11-28 12:48:18.892376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.723 [2024-11-28 12:48:18.892383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.723 [2024-11-28 12:48:18.892389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca240 is same with the state(6) to be set 00:24:36.723 [2024-11-28 12:48:18.902277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ca240 (9): Bad file descriptor 00:24:36.723 12:48:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:36.723 12:48:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:36.723 [2024-11-28 12:48:18.912311] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:36.723 [2024-11-28 12:48:18.912324] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:36.723 [2024-11-28 12:48:18.912329] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:36.723 [2024-11-28 12:48:18.912334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:36.723 [2024-11-28 12:48:18.912352] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:37.654 12:48:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:37.654 12:48:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:37.654 12:48:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:37.654 12:48:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:37.654 12:48:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.654 12:48:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:37.654 12:48:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:37.654 [2024-11-28 12:48:19.951964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:37.654 [2024-11-28 12:48:19.952001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ca240 with addr=10.0.0.2, port=4420 00:24:37.654 [2024-11-28 12:48:19.952015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca240 is same with the state(6) to be set 00:24:37.654 [2024-11-28 12:48:19.952044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ca240 (9): Bad file descriptor 00:24:37.654 [2024-11-28 12:48:19.952451] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:24:37.654 [2024-11-28 12:48:19.952477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:37.654 [2024-11-28 12:48:19.952487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:37.654 [2024-11-28 12:48:19.952499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:37.654 [2024-11-28 12:48:19.952508] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:37.654 [2024-11-28 12:48:19.952515] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:37.654 [2024-11-28 12:48:19.952521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:37.654 [2024-11-28 12:48:19.952531] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:37.654 [2024-11-28 12:48:19.952537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:37.654 12:48:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.654 12:48:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:37.654 12:48:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:38.596 [2024-11-28 12:48:20.955017] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:38.596 [2024-11-28 12:48:20.955038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:38.596 [2024-11-28 12:48:20.955051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:38.596 [2024-11-28 12:48:20.955058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:38.596 [2024-11-28 12:48:20.955067] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:38.596 [2024-11-28 12:48:20.955074] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:38.596 [2024-11-28 12:48:20.955079] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:38.596 [2024-11-28 12:48:20.955083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:38.596 [2024-11-28 12:48:20.955104] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:38.596 [2024-11-28 12:48:20.955125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.596 [2024-11-28 12:48:20.955135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-11-28 12:48:20.955144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.596 [2024-11-28 12:48:20.955164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-11-28 12:48:20.955171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:38.596 [2024-11-28 12:48:20.955178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-11-28 12:48:20.955185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.596 [2024-11-28 12:48:20.955191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-11-28 12:48:20.955198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.596 [2024-11-28 12:48:20.955205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-11-28 12:48:20.955211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:24:38.596 [2024-11-28 12:48:20.955302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b9910 (9): Bad file descriptor 00:24:38.596 [2024-11-28 12:48:20.956313] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:38.596 [2024-11-28 12:48:20.956323] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:38.596 12:48:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:38.596 12:48:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.596 12:48:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:38.596 12:48:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:38.596 
12:48:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:38.596 12:48:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.596 12:48:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.596 12:48:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.596 12:48:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:38.596 12:48:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.596 12:48:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.596 12:48:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:38.596 12:48:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:38.596 12:48:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.596 12:48:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:38.597 12:48:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.597 12:48:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:38.597 12:48:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.597 12:48:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:38.597 12:48:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
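The xtrace above repeatedly evaluates `get_bdev_list` and sleeps one second until the expected bdev name shows up (`wait_for_bdev nvme1n1`). A minimal self-contained sketch of that polling pattern follows; `rpc_cmd` is stubbed out here (in the real harness it drives SPDK's JSON-RPC against `/tmp/host.sock`), and the function bodies are reconstructed from the trace, not copied from the SPDK sources:

```shell
#!/usr/bin/env bash
# Sketch of the polling loop seen in the trace. rpc_cmd is a stub
# standing in for the real helper that sends JSON-RPC to the SPDK
# application socket (rpc.py -s /tmp/host.sock ...).
rpc_cmd() {
    echo '[{"name":"nvme1n1"}]'
}

get_bdev_list() {
    # Same pipeline as the trace: extract bdev names from the JSON,
    # sort them, and join them onto a single line.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local bdev=$1
    # Poll once per second until the bdev list matches the expected name.
    while [[ "$(get_bdev_list)" != "$bdev" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme1n1 && echo "bdev nvme1n1 present"
```

With the stub the loop exits on the first pass; against a live target it keeps polling while the controller reconnects, which is exactly the behavior the repeated `sleep 1` iterations in the log show.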
00:24:38.853 12:48:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:38.853 12:48:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:39.783 12:48:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:39.783 12:48:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.783 12:48:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:39.783 12:48:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.783 12:48:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:39.783 12:48:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:39.783 12:48:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:39.783 12:48:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.783 12:48:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:39.783 12:48:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:40.715 [2024-11-28 12:48:23.011032] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:40.715 [2024-11-28 12:48:23.011050] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:40.715 [2024-11-28 12:48:23.011065] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:40.715 [2024-11-28 12:48:23.097326] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:40.715 [2024-11-28 12:48:23.191960] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:24:40.715 [2024-11-28 12:48:23.192541] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x21f74a0:1 started. 00:24:40.715 [2024-11-28 12:48:23.193586] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:40.715 [2024-11-28 12:48:23.193617] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:40.715 [2024-11-28 12:48:23.193634] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:40.715 [2024-11-28 12:48:23.193646] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:40.715 [2024-11-28 12:48:23.193654] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:40.715 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:40.715 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:40.715 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:40.715 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:40.715 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.715 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:40.715 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.715 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.972 [2024-11-28 12:48:23.240426] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x21f74a0 was disconnected and freed. delete nvme_qpair. 00:24:40.972 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:40.972 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:40.972 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2641658 00:24:40.972 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2641658 ']' 00:24:40.972 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2641658 00:24:40.972 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:40.972 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:40.972 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2641658 00:24:40.972 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:40.972 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:40.972 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2641658' 00:24:40.972 killing process with pid 2641658 00:24:40.972 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2641658 00:24:40.972 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2641658 00:24:40.972 12:48:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:40.972 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:40.972 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:40.972 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:40.972 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:40.972 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:40.972 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:40.972 rmmod nvme_tcp 00:24:40.972 rmmod nvme_fabrics 00:24:41.230 rmmod nvme_keyring 00:24:41.230 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:41.230 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:41.230 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:41.230 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2641447 ']' 00:24:41.230 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2641447 00:24:41.230 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2641447 ']' 00:24:41.230 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2641447 00:24:41.230 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:41.230 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:41.230 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 2641447 00:24:41.230 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:41.230 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:41.230 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2641447' 00:24:41.230 killing process with pid 2641447 00:24:41.230 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2641447 00:24:41.230 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2641447 00:24:41.230 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:41.230 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:41.230 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:41.230 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:41.230 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:24:41.230 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:24:41.230 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:41.230 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:41.231 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:41.231 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.489 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
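The `killprocess` helper traced above guards the kill with a liveness check (`kill -0`) and a `ps` lookup of the command name so it never signals a `sudo` wrapper, then reaps the process with `wait`. A rough reconstruction, assuming the semantics implied by the trace rather than the actual `autotest_common.sh` source:

```shell
#!/usr/bin/env bash
# Reconstruction (simplified) of the killprocess pattern in the trace.
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1                  # reject an empty pid
    kill -0 "$pid" 2>/dev/null || return 1     # process must still exist
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    else
        process_name=$(ps -o comm= -p "$pid")
    fi
    [ "$process_name" != sudo ] || return 1    # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true            # reap if it is our child
}

sleep 30 &        # demo victim started by this shell
killprocess $!
```

The `comm` check matters because the harness frequently launches targets under `sudo`; killing the wrapper instead of the reactor would leave the SPDK process orphaned.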
00:24:41.489 12:48:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.385 12:48:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:43.385 00:24:43.385 real 0m20.160s 00:24:43.385 user 0m24.489s 00:24:43.385 sys 0m5.662s 00:24:43.385 12:48:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:43.385 12:48:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.385 ************************************ 00:24:43.385 END TEST nvmf_discovery_remove_ifc 00:24:43.385 ************************************ 00:24:43.385 12:48:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:43.385 12:48:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:43.385 12:48:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:43.385 12:48:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.385 ************************************ 00:24:43.385 START TEST nvmf_identify_kernel_target 00:24:43.385 ************************************ 00:24:43.385 12:48:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:43.643 * Looking for test storage... 
00:24:43.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:43.643 12:48:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:43.643 12:48:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:24:43.643 12:48:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:43.643 12:48:26 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:43.643 12:48:26 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:43.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.643 --rc genhtml_branch_coverage=1 00:24:43.643 --rc genhtml_function_coverage=1 00:24:43.643 --rc genhtml_legend=1 00:24:43.643 --rc geninfo_all_blocks=1 00:24:43.643 --rc geninfo_unexecuted_blocks=1 00:24:43.643 00:24:43.643 ' 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:43.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.643 --rc genhtml_branch_coverage=1 00:24:43.643 --rc genhtml_function_coverage=1 00:24:43.643 --rc genhtml_legend=1 00:24:43.643 --rc geninfo_all_blocks=1 00:24:43.643 --rc geninfo_unexecuted_blocks=1 00:24:43.643 00:24:43.643 ' 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:43.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.643 --rc genhtml_branch_coverage=1 00:24:43.643 --rc genhtml_function_coverage=1 00:24:43.643 --rc genhtml_legend=1 00:24:43.643 --rc geninfo_all_blocks=1 00:24:43.643 --rc geninfo_unexecuted_blocks=1 00:24:43.643 00:24:43.643 ' 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:43.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.643 --rc genhtml_branch_coverage=1 00:24:43.643 --rc genhtml_function_coverage=1 00:24:43.643 --rc genhtml_legend=1 00:24:43.643 --rc geninfo_all_blocks=1 00:24:43.643 --rc geninfo_unexecuted_blocks=1 00:24:43.643 00:24:43.643 ' 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
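The `scripts/common.sh` trace above walks a dot-separated version comparison (`lt 1.15 2` via `cmp_versions`, splitting each version on `.-` and comparing component by component). A condensed sketch of that algorithm, written from the xtrace rather than the actual SPDK script, so treat the exact names and edge-case behavior as assumptions:

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions logic visible in the trace: split both
# versions on '.' and '-', pad the shorter one with zeros, and decide
# on the first component that differs.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v len
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$3"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a > b )); then [[ $op == '>' || $op == '>=' ]]; return; fi
        if (( a < b )); then [[ $op == '<' || $op == '<=' ]]; return; fi
    done
    # All components equal.
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "1.15 is less than 2"
```

The numeric per-component comparison is why `1.15 < 2` holds even though a plain string comparison would order them the other way; the harness uses this to gate lcov options on the installed tool version.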
00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.643 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:43.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:43.644 12:48:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:48.950 12:48:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:48.950 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:48.950 12:48:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:48.950 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:48.950 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.951 12:48:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:48.951 Found net devices under 0000:86:00.0: cvl_0_0 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:48.951 Found net devices under 0000:86:00.1: cvl_0_1 
00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:48.951 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:49.210 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:49.210 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:49.210 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:49.210 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:49.210 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:49.210 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:49.210 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:49.210 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:49.210 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:49.210 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:49.210 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:49.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:49.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.420 ms 00:24:49.210 00:24:49.210 --- 10.0.0.2 ping statistics --- 00:24:49.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.210 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:24:49.210 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:49.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:49.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:24:49.210 00:24:49.210 --- 10.0.0.1 ping statistics --- 00:24:49.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.210 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:24:49.210 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:49.210 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:24:49.210 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:49.210 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:49.210 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:49.210 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:49.210 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:49.210 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:49.210 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:49.470 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:49.470 
12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:49.470 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:49.470 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:49.470 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:49.470 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.470 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.470 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:49.470 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.470 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:49.470 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:49.470 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:49.470 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:49.470 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:49.470 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:49.470 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:49.470 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:49.470 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:49.470 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:49.470 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:49.470 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:49.470 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:49.470 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:49.470 12:48:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:52.000 Waiting for block devices as requested 00:24:52.000 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:52.258 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:52.258 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:52.258 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:52.515 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:52.515 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:52.515 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:52.515 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:52.782 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:52.782 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:52.782 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:52.782 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:53.039 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:53.039 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:53.039 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:24:53.296 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:53.296 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:53.296 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:53.296 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:53.296 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:53.296 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:53.296 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:53.296 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:53.296 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:53.296 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:53.296 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:53.296 No valid GPT data, bailing 00:24:53.296 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:53.296 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:53.296 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:53.296 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:53.296 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:53.296 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:53.297 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:53.297 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:53.297 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:53.297 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:24:53.297 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:53.297 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:24:53.297 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:53.297 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:24:53.297 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:24:53.297 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:24:53.297 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:53.555 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:53.555 00:24:53.555 Discovery Log Number of Records 2, Generation counter 2 00:24:53.555 =====Discovery Log Entry 0====== 00:24:53.555 trtype: tcp 00:24:53.555 adrfam: ipv4 00:24:53.555 subtype: current discovery subsystem 
00:24:53.555 treq: not specified, sq flow control disable supported 00:24:53.555 portid: 1 00:24:53.555 trsvcid: 4420 00:24:53.555 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:53.555 traddr: 10.0.0.1 00:24:53.555 eflags: none 00:24:53.555 sectype: none 00:24:53.555 =====Discovery Log Entry 1====== 00:24:53.555 trtype: tcp 00:24:53.555 adrfam: ipv4 00:24:53.555 subtype: nvme subsystem 00:24:53.555 treq: not specified, sq flow control disable supported 00:24:53.555 portid: 1 00:24:53.555 trsvcid: 4420 00:24:53.555 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:53.555 traddr: 10.0.0.1 00:24:53.555 eflags: none 00:24:53.555 sectype: none 00:24:53.555 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:53.555 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:53.555 ===================================================== 00:24:53.555 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:53.555 ===================================================== 00:24:53.555 Controller Capabilities/Features 00:24:53.555 ================================ 00:24:53.555 Vendor ID: 0000 00:24:53.555 Subsystem Vendor ID: 0000 00:24:53.555 Serial Number: 579a2cf409b64722ae22 00:24:53.555 Model Number: Linux 00:24:53.555 Firmware Version: 6.8.9-20 00:24:53.555 Recommended Arb Burst: 0 00:24:53.555 IEEE OUI Identifier: 00 00 00 00:24:53.555 Multi-path I/O 00:24:53.555 May have multiple subsystem ports: No 00:24:53.555 May have multiple controllers: No 00:24:53.555 Associated with SR-IOV VF: No 00:24:53.555 Max Data Transfer Size: Unlimited 00:24:53.555 Max Number of Namespaces: 0 00:24:53.555 Max Number of I/O Queues: 1024 00:24:53.555 NVMe Specification Version (VS): 1.3 00:24:53.555 NVMe Specification Version (Identify): 1.3 00:24:53.555 Maximum Queue Entries: 1024 
00:24:53.555 Contiguous Queues Required: No 00:24:53.555 Arbitration Mechanisms Supported 00:24:53.555 Weighted Round Robin: Not Supported 00:24:53.555 Vendor Specific: Not Supported 00:24:53.555 Reset Timeout: 7500 ms 00:24:53.555 Doorbell Stride: 4 bytes 00:24:53.555 NVM Subsystem Reset: Not Supported 00:24:53.555 Command Sets Supported 00:24:53.555 NVM Command Set: Supported 00:24:53.555 Boot Partition: Not Supported 00:24:53.555 Memory Page Size Minimum: 4096 bytes 00:24:53.555 Memory Page Size Maximum: 4096 bytes 00:24:53.555 Persistent Memory Region: Not Supported 00:24:53.555 Optional Asynchronous Events Supported 00:24:53.555 Namespace Attribute Notices: Not Supported 00:24:53.555 Firmware Activation Notices: Not Supported 00:24:53.555 ANA Change Notices: Not Supported 00:24:53.555 PLE Aggregate Log Change Notices: Not Supported 00:24:53.555 LBA Status Info Alert Notices: Not Supported 00:24:53.555 EGE Aggregate Log Change Notices: Not Supported 00:24:53.555 Normal NVM Subsystem Shutdown event: Not Supported 00:24:53.555 Zone Descriptor Change Notices: Not Supported 00:24:53.555 Discovery Log Change Notices: Supported 00:24:53.555 Controller Attributes 00:24:53.555 128-bit Host Identifier: Not Supported 00:24:53.555 Non-Operational Permissive Mode: Not Supported 00:24:53.555 NVM Sets: Not Supported 00:24:53.555 Read Recovery Levels: Not Supported 00:24:53.555 Endurance Groups: Not Supported 00:24:53.555 Predictable Latency Mode: Not Supported 00:24:53.555 Traffic Based Keep ALive: Not Supported 00:24:53.555 Namespace Granularity: Not Supported 00:24:53.555 SQ Associations: Not Supported 00:24:53.555 UUID List: Not Supported 00:24:53.555 Multi-Domain Subsystem: Not Supported 00:24:53.555 Fixed Capacity Management: Not Supported 00:24:53.555 Variable Capacity Management: Not Supported 00:24:53.555 Delete Endurance Group: Not Supported 00:24:53.555 Delete NVM Set: Not Supported 00:24:53.555 Extended LBA Formats Supported: Not Supported 00:24:53.555 Flexible 
Data Placement Supported: Not Supported 00:24:53.555 00:24:53.555 Controller Memory Buffer Support 00:24:53.555 ================================ 00:24:53.555 Supported: No 00:24:53.555 00:24:53.555 Persistent Memory Region Support 00:24:53.555 ================================ 00:24:53.555 Supported: No 00:24:53.555 00:24:53.555 Admin Command Set Attributes 00:24:53.555 ============================ 00:24:53.555 Security Send/Receive: Not Supported 00:24:53.555 Format NVM: Not Supported 00:24:53.555 Firmware Activate/Download: Not Supported 00:24:53.555 Namespace Management: Not Supported 00:24:53.555 Device Self-Test: Not Supported 00:24:53.555 Directives: Not Supported 00:24:53.555 NVMe-MI: Not Supported 00:24:53.555 Virtualization Management: Not Supported 00:24:53.555 Doorbell Buffer Config: Not Supported 00:24:53.555 Get LBA Status Capability: Not Supported 00:24:53.555 Command & Feature Lockdown Capability: Not Supported 00:24:53.555 Abort Command Limit: 1 00:24:53.555 Async Event Request Limit: 1 00:24:53.555 Number of Firmware Slots: N/A 00:24:53.555 Firmware Slot 1 Read-Only: N/A 00:24:53.555 Firmware Activation Without Reset: N/A 00:24:53.555 Multiple Update Detection Support: N/A 00:24:53.555 Firmware Update Granularity: No Information Provided 00:24:53.555 Per-Namespace SMART Log: No 00:24:53.555 Asymmetric Namespace Access Log Page: Not Supported 00:24:53.555 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:53.555 Command Effects Log Page: Not Supported 00:24:53.555 Get Log Page Extended Data: Supported 00:24:53.555 Telemetry Log Pages: Not Supported 00:24:53.555 Persistent Event Log Pages: Not Supported 00:24:53.555 Supported Log Pages Log Page: May Support 00:24:53.555 Commands Supported & Effects Log Page: Not Supported 00:24:53.555 Feature Identifiers & Effects Log Page:May Support 00:24:53.555 NVMe-MI Commands & Effects Log Page: May Support 00:24:53.555 Data Area 4 for Telemetry Log: Not Supported 00:24:53.555 Error Log Page Entries 
Supported: 1 00:24:53.555 Keep Alive: Not Supported 00:24:53.555 00:24:53.555 NVM Command Set Attributes 00:24:53.555 ========================== 00:24:53.555 Submission Queue Entry Size 00:24:53.555 Max: 1 00:24:53.555 Min: 1 00:24:53.555 Completion Queue Entry Size 00:24:53.555 Max: 1 00:24:53.555 Min: 1 00:24:53.555 Number of Namespaces: 0 00:24:53.555 Compare Command: Not Supported 00:24:53.555 Write Uncorrectable Command: Not Supported 00:24:53.555 Dataset Management Command: Not Supported 00:24:53.556 Write Zeroes Command: Not Supported 00:24:53.556 Set Features Save Field: Not Supported 00:24:53.556 Reservations: Not Supported 00:24:53.556 Timestamp: Not Supported 00:24:53.556 Copy: Not Supported 00:24:53.556 Volatile Write Cache: Not Present 00:24:53.556 Atomic Write Unit (Normal): 1 00:24:53.556 Atomic Write Unit (PFail): 1 00:24:53.556 Atomic Compare & Write Unit: 1 00:24:53.556 Fused Compare & Write: Not Supported 00:24:53.556 Scatter-Gather List 00:24:53.556 SGL Command Set: Supported 00:24:53.556 SGL Keyed: Not Supported 00:24:53.556 SGL Bit Bucket Descriptor: Not Supported 00:24:53.556 SGL Metadata Pointer: Not Supported 00:24:53.556 Oversized SGL: Not Supported 00:24:53.556 SGL Metadata Address: Not Supported 00:24:53.556 SGL Offset: Supported 00:24:53.556 Transport SGL Data Block: Not Supported 00:24:53.556 Replay Protected Memory Block: Not Supported 00:24:53.556 00:24:53.556 Firmware Slot Information 00:24:53.556 ========================= 00:24:53.556 Active slot: 0 00:24:53.556 00:24:53.556 00:24:53.556 Error Log 00:24:53.556 ========= 00:24:53.556 00:24:53.556 Active Namespaces 00:24:53.556 ================= 00:24:53.556 Discovery Log Page 00:24:53.556 ================== 00:24:53.556 Generation Counter: 2 00:24:53.556 Number of Records: 2 00:24:53.556 Record Format: 0 00:24:53.556 00:24:53.556 Discovery Log Entry 0 00:24:53.556 ---------------------- 00:24:53.556 Transport Type: 3 (TCP) 00:24:53.556 Address Family: 1 (IPv4) 00:24:53.556 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:24:53.556 Entry Flags: 00:24:53.556 Duplicate Returned Information: 0 00:24:53.556 Explicit Persistent Connection Support for Discovery: 0 00:24:53.556 Transport Requirements: 00:24:53.556 Secure Channel: Not Specified 00:24:53.556 Port ID: 1 (0x0001) 00:24:53.556 Controller ID: 65535 (0xffff) 00:24:53.556 Admin Max SQ Size: 32 00:24:53.556 Transport Service Identifier: 4420 00:24:53.556 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:53.556 Transport Address: 10.0.0.1 00:24:53.556 Discovery Log Entry 1 00:24:53.556 ---------------------- 00:24:53.556 Transport Type: 3 (TCP) 00:24:53.556 Address Family: 1 (IPv4) 00:24:53.556 Subsystem Type: 2 (NVM Subsystem) 00:24:53.556 Entry Flags: 00:24:53.556 Duplicate Returned Information: 0 00:24:53.556 Explicit Persistent Connection Support for Discovery: 0 00:24:53.556 Transport Requirements: 00:24:53.556 Secure Channel: Not Specified 00:24:53.556 Port ID: 1 (0x0001) 00:24:53.556 Controller ID: 65535 (0xffff) 00:24:53.556 Admin Max SQ Size: 32 00:24:53.556 Transport Service Identifier: 4420 00:24:53.556 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:53.556 Transport Address: 10.0.0.1 00:24:53.556 12:48:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:53.815 get_feature(0x01) failed 00:24:53.815 get_feature(0x02) failed 00:24:53.815 get_feature(0x04) failed 00:24:53.815 ===================================================== 00:24:53.815 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:53.815 ===================================================== 00:24:53.815 Controller Capabilities/Features 00:24:53.815 ================================ 00:24:53.815 Vendor ID: 0000 00:24:53.815 Subsystem Vendor ID: 
0000 00:24:53.815 Serial Number: 5250173bd0f03a121d6e 00:24:53.815 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:53.815 Firmware Version: 6.8.9-20 00:24:53.815 Recommended Arb Burst: 6 00:24:53.815 IEEE OUI Identifier: 00 00 00 00:24:53.815 Multi-path I/O 00:24:53.815 May have multiple subsystem ports: Yes 00:24:53.815 May have multiple controllers: Yes 00:24:53.815 Associated with SR-IOV VF: No 00:24:53.815 Max Data Transfer Size: Unlimited 00:24:53.815 Max Number of Namespaces: 1024 00:24:53.815 Max Number of I/O Queues: 128 00:24:53.815 NVMe Specification Version (VS): 1.3 00:24:53.815 NVMe Specification Version (Identify): 1.3 00:24:53.815 Maximum Queue Entries: 1024 00:24:53.815 Contiguous Queues Required: No 00:24:53.815 Arbitration Mechanisms Supported 00:24:53.815 Weighted Round Robin: Not Supported 00:24:53.815 Vendor Specific: Not Supported 00:24:53.815 Reset Timeout: 7500 ms 00:24:53.815 Doorbell Stride: 4 bytes 00:24:53.815 NVM Subsystem Reset: Not Supported 00:24:53.815 Command Sets Supported 00:24:53.815 NVM Command Set: Supported 00:24:53.815 Boot Partition: Not Supported 00:24:53.815 Memory Page Size Minimum: 4096 bytes 00:24:53.815 Memory Page Size Maximum: 4096 bytes 00:24:53.815 Persistent Memory Region: Not Supported 00:24:53.815 Optional Asynchronous Events Supported 00:24:53.815 Namespace Attribute Notices: Supported 00:24:53.815 Firmware Activation Notices: Not Supported 00:24:53.815 ANA Change Notices: Supported 00:24:53.815 PLE Aggregate Log Change Notices: Not Supported 00:24:53.815 LBA Status Info Alert Notices: Not Supported 00:24:53.815 EGE Aggregate Log Change Notices: Not Supported 00:24:53.815 Normal NVM Subsystem Shutdown event: Not Supported 00:24:53.815 Zone Descriptor Change Notices: Not Supported 00:24:53.815 Discovery Log Change Notices: Not Supported 00:24:53.815 Controller Attributes 00:24:53.815 128-bit Host Identifier: Supported 00:24:53.815 Non-Operational Permissive Mode: Not Supported 00:24:53.815 NVM Sets: Not 
Supported 00:24:53.815 Read Recovery Levels: Not Supported 00:24:53.815 Endurance Groups: Not Supported 00:24:53.815 Predictable Latency Mode: Not Supported 00:24:53.815 Traffic Based Keep ALive: Supported 00:24:53.815 Namespace Granularity: Not Supported 00:24:53.815 SQ Associations: Not Supported 00:24:53.815 UUID List: Not Supported 00:24:53.815 Multi-Domain Subsystem: Not Supported 00:24:53.815 Fixed Capacity Management: Not Supported 00:24:53.815 Variable Capacity Management: Not Supported 00:24:53.815 Delete Endurance Group: Not Supported 00:24:53.815 Delete NVM Set: Not Supported 00:24:53.815 Extended LBA Formats Supported: Not Supported 00:24:53.815 Flexible Data Placement Supported: Not Supported 00:24:53.815 00:24:53.815 Controller Memory Buffer Support 00:24:53.815 ================================ 00:24:53.815 Supported: No 00:24:53.815 00:24:53.815 Persistent Memory Region Support 00:24:53.815 ================================ 00:24:53.815 Supported: No 00:24:53.815 00:24:53.815 Admin Command Set Attributes 00:24:53.815 ============================ 00:24:53.815 Security Send/Receive: Not Supported 00:24:53.815 Format NVM: Not Supported 00:24:53.815 Firmware Activate/Download: Not Supported 00:24:53.815 Namespace Management: Not Supported 00:24:53.815 Device Self-Test: Not Supported 00:24:53.815 Directives: Not Supported 00:24:53.815 NVMe-MI: Not Supported 00:24:53.815 Virtualization Management: Not Supported 00:24:53.815 Doorbell Buffer Config: Not Supported 00:24:53.815 Get LBA Status Capability: Not Supported 00:24:53.815 Command & Feature Lockdown Capability: Not Supported 00:24:53.815 Abort Command Limit: 4 00:24:53.815 Async Event Request Limit: 4 00:24:53.815 Number of Firmware Slots: N/A 00:24:53.815 Firmware Slot 1 Read-Only: N/A 00:24:53.815 Firmware Activation Without Reset: N/A 00:24:53.815 Multiple Update Detection Support: N/A 00:24:53.815 Firmware Update Granularity: No Information Provided 00:24:53.815 Per-Namespace SMART Log: Yes 
00:24:53.815 Asymmetric Namespace Access Log Page: Supported 00:24:53.815 ANA Transition Time : 10 sec 00:24:53.815 00:24:53.815 Asymmetric Namespace Access Capabilities 00:24:53.815 ANA Optimized State : Supported 00:24:53.815 ANA Non-Optimized State : Supported 00:24:53.815 ANA Inaccessible State : Supported 00:24:53.815 ANA Persistent Loss State : Supported 00:24:53.815 ANA Change State : Supported 00:24:53.815 ANAGRPID is not changed : No 00:24:53.815 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:53.815 00:24:53.815 ANA Group Identifier Maximum : 128 00:24:53.815 Number of ANA Group Identifiers : 128 00:24:53.815 Max Number of Allowed Namespaces : 1024 00:24:53.815 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:53.815 Command Effects Log Page: Supported 00:24:53.815 Get Log Page Extended Data: Supported 00:24:53.815 Telemetry Log Pages: Not Supported 00:24:53.815 Persistent Event Log Pages: Not Supported 00:24:53.815 Supported Log Pages Log Page: May Support 00:24:53.815 Commands Supported & Effects Log Page: Not Supported 00:24:53.815 Feature Identifiers & Effects Log Page:May Support 00:24:53.815 NVMe-MI Commands & Effects Log Page: May Support 00:24:53.816 Data Area 4 for Telemetry Log: Not Supported 00:24:53.816 Error Log Page Entries Supported: 128 00:24:53.816 Keep Alive: Supported 00:24:53.816 Keep Alive Granularity: 1000 ms 00:24:53.816 00:24:53.816 NVM Command Set Attributes 00:24:53.816 ========================== 00:24:53.816 Submission Queue Entry Size 00:24:53.816 Max: 64 00:24:53.816 Min: 64 00:24:53.816 Completion Queue Entry Size 00:24:53.816 Max: 16 00:24:53.816 Min: 16 00:24:53.816 Number of Namespaces: 1024 00:24:53.816 Compare Command: Not Supported 00:24:53.816 Write Uncorrectable Command: Not Supported 00:24:53.816 Dataset Management Command: Supported 00:24:53.816 Write Zeroes Command: Supported 00:24:53.816 Set Features Save Field: Not Supported 00:24:53.816 Reservations: Not Supported 00:24:53.816 Timestamp: Not Supported 
00:24:53.816 Copy: Not Supported 00:24:53.816 Volatile Write Cache: Present 00:24:53.816 Atomic Write Unit (Normal): 1 00:24:53.816 Atomic Write Unit (PFail): 1 00:24:53.816 Atomic Compare & Write Unit: 1 00:24:53.816 Fused Compare & Write: Not Supported 00:24:53.816 Scatter-Gather List 00:24:53.816 SGL Command Set: Supported 00:24:53.816 SGL Keyed: Not Supported 00:24:53.816 SGL Bit Bucket Descriptor: Not Supported 00:24:53.816 SGL Metadata Pointer: Not Supported 00:24:53.816 Oversized SGL: Not Supported 00:24:53.816 SGL Metadata Address: Not Supported 00:24:53.816 SGL Offset: Supported 00:24:53.816 Transport SGL Data Block: Not Supported 00:24:53.816 Replay Protected Memory Block: Not Supported 00:24:53.816 00:24:53.816 Firmware Slot Information 00:24:53.816 ========================= 00:24:53.816 Active slot: 0 00:24:53.816 00:24:53.816 Asymmetric Namespace Access 00:24:53.816 =========================== 00:24:53.816 Change Count : 0 00:24:53.816 Number of ANA Group Descriptors : 1 00:24:53.816 ANA Group Descriptor : 0 00:24:53.816 ANA Group ID : 1 00:24:53.816 Number of NSID Values : 1 00:24:53.816 Change Count : 0 00:24:53.816 ANA State : 1 00:24:53.816 Namespace Identifier : 1 00:24:53.816 00:24:53.816 Commands Supported and Effects 00:24:53.816 ============================== 00:24:53.816 Admin Commands 00:24:53.816 -------------- 00:24:53.816 Get Log Page (02h): Supported 00:24:53.816 Identify (06h): Supported 00:24:53.816 Abort (08h): Supported 00:24:53.816 Set Features (09h): Supported 00:24:53.816 Get Features (0Ah): Supported 00:24:53.816 Asynchronous Event Request (0Ch): Supported 00:24:53.816 Keep Alive (18h): Supported 00:24:53.816 I/O Commands 00:24:53.816 ------------ 00:24:53.816 Flush (00h): Supported 00:24:53.816 Write (01h): Supported LBA-Change 00:24:53.816 Read (02h): Supported 00:24:53.816 Write Zeroes (08h): Supported LBA-Change 00:24:53.816 Dataset Management (09h): Supported 00:24:53.816 00:24:53.816 Error Log 00:24:53.816 ========= 
00:24:53.816 Entry: 0 00:24:53.816 Error Count: 0x3 00:24:53.816 Submission Queue Id: 0x0 00:24:53.816 Command Id: 0x5 00:24:53.816 Phase Bit: 0 00:24:53.816 Status Code: 0x2 00:24:53.816 Status Code Type: 0x0 00:24:53.816 Do Not Retry: 1 00:24:53.816 Error Location: 0x28 00:24:53.816 LBA: 0x0 00:24:53.816 Namespace: 0x0 00:24:53.816 Vendor Log Page: 0x0 00:24:53.816 ----------- 00:24:53.816 Entry: 1 00:24:53.816 Error Count: 0x2 00:24:53.816 Submission Queue Id: 0x0 00:24:53.816 Command Id: 0x5 00:24:53.816 Phase Bit: 0 00:24:53.816 Status Code: 0x2 00:24:53.816 Status Code Type: 0x0 00:24:53.816 Do Not Retry: 1 00:24:53.816 Error Location: 0x28 00:24:53.816 LBA: 0x0 00:24:53.816 Namespace: 0x0 00:24:53.816 Vendor Log Page: 0x0 00:24:53.816 ----------- 00:24:53.816 Entry: 2 00:24:53.816 Error Count: 0x1 00:24:53.816 Submission Queue Id: 0x0 00:24:53.816 Command Id: 0x4 00:24:53.816 Phase Bit: 0 00:24:53.816 Status Code: 0x2 00:24:53.816 Status Code Type: 0x0 00:24:53.816 Do Not Retry: 1 00:24:53.816 Error Location: 0x28 00:24:53.816 LBA: 0x0 00:24:53.816 Namespace: 0x0 00:24:53.816 Vendor Log Page: 0x0 00:24:53.816 00:24:53.816 Number of Queues 00:24:53.816 ================ 00:24:53.816 Number of I/O Submission Queues: 128 00:24:53.816 Number of I/O Completion Queues: 128 00:24:53.816 00:24:53.816 ZNS Specific Controller Data 00:24:53.816 ============================ 00:24:53.816 Zone Append Size Limit: 0 00:24:53.816 00:24:53.816 00:24:53.816 Active Namespaces 00:24:53.816 ================= 00:24:53.816 get_feature(0x05) failed 00:24:53.816 Namespace ID:1 00:24:53.816 Command Set Identifier: NVM (00h) 00:24:53.816 Deallocate: Supported 00:24:53.816 Deallocated/Unwritten Error: Not Supported 00:24:53.816 Deallocated Read Value: Unknown 00:24:53.816 Deallocate in Write Zeroes: Not Supported 00:24:53.816 Deallocated Guard Field: 0xFFFF 00:24:53.816 Flush: Supported 00:24:53.816 Reservation: Not Supported 00:24:53.816 Namespace Sharing Capabilities: Multiple 
Controllers 00:24:53.816 Size (in LBAs): 1953525168 (931GiB) 00:24:53.816 Capacity (in LBAs): 1953525168 (931GiB) 00:24:53.816 Utilization (in LBAs): 1953525168 (931GiB) 00:24:53.816 UUID: b1406536-69a8-41fd-b6fb-748309438d0f 00:24:53.816 Thin Provisioning: Not Supported 00:24:53.816 Per-NS Atomic Units: Yes 00:24:53.816 Atomic Boundary Size (Normal): 0 00:24:53.816 Atomic Boundary Size (PFail): 0 00:24:53.816 Atomic Boundary Offset: 0 00:24:53.816 NGUID/EUI64 Never Reused: No 00:24:53.816 ANA group ID: 1 00:24:53.816 Namespace Write Protected: No 00:24:53.816 Number of LBA Formats: 1 00:24:53.816 Current LBA Format: LBA Format #00 00:24:53.816 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:53.816 00:24:53.816 12:48:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:53.816 12:48:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:53.816 12:48:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:24:53.816 12:48:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:53.816 12:48:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:24:53.816 12:48:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:53.816 12:48:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:53.816 rmmod nvme_tcp 00:24:53.816 rmmod nvme_fabrics 00:24:53.817 12:48:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:53.817 12:48:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:24:53.817 12:48:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:24:53.817 12:48:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:24:53.817 12:48:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:53.817 12:48:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:53.817 12:48:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:53.817 12:48:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:24:53.817 12:48:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:24:53.817 12:48:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:53.817 12:48:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:53.817 12:48:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:53.817 12:48:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:53.817 12:48:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.817 12:48:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:53.817 12:48:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.720 12:48:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:55.720 12:48:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:55.720 12:48:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:55.720 12:48:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:24:55.720 12:48:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:55.720 12:48:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:55.977 12:48:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:55.977 12:48:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:55.977 12:48:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:55.977 12:48:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:55.977 12:48:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:58.506 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:58.506 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:58.506 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:58.506 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:58.506 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:58.506 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:58.506 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:58.506 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:58.506 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:58.506 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:58.506 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:58.506 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:58.506 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:58.507 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:58.507 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:58.507 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:24:59.442 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:24:59.442 00:24:59.442 real 0m15.968s 00:24:59.442 user 0m3.936s 00:24:59.442 sys 0m8.430s 00:24:59.442 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:59.442 12:48:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.442 ************************************ 00:24:59.442 END TEST nvmf_identify_kernel_target 00:24:59.442 ************************************ 00:24:59.442 12:48:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:59.442 12:48:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:59.442 12:48:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:59.442 12:48:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.442 ************************************ 00:24:59.442 START TEST nvmf_auth_host 00:24:59.442 ************************************ 00:24:59.442 12:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:59.702 * Looking for test storage... 
00:24:59.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:59.702 12:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:59.702 12:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:59.702 12:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:59.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.702 --rc genhtml_branch_coverage=1 00:24:59.702 --rc genhtml_function_coverage=1 00:24:59.702 --rc genhtml_legend=1 00:24:59.702 --rc geninfo_all_blocks=1 00:24:59.702 --rc geninfo_unexecuted_blocks=1 00:24:59.702 00:24:59.702 ' 00:24:59.702 12:48:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:59.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.702 --rc genhtml_branch_coverage=1 00:24:59.702 --rc genhtml_function_coverage=1 00:24:59.702 --rc genhtml_legend=1 00:24:59.702 --rc geninfo_all_blocks=1 00:24:59.702 --rc geninfo_unexecuted_blocks=1 00:24:59.702 00:24:59.702 ' 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:59.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.702 --rc genhtml_branch_coverage=1 00:24:59.702 --rc genhtml_function_coverage=1 00:24:59.702 --rc genhtml_legend=1 00:24:59.702 --rc geninfo_all_blocks=1 00:24:59.702 --rc geninfo_unexecuted_blocks=1 00:24:59.702 00:24:59.702 ' 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:59.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.702 --rc genhtml_branch_coverage=1 00:24:59.702 --rc genhtml_function_coverage=1 00:24:59.702 --rc genhtml_legend=1 00:24:59.702 --rc geninfo_all_blocks=1 00:24:59.702 --rc geninfo_unexecuted_blocks=1 00:24:59.702 00:24:59.702 ' 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.702 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.703 12:48:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:59.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:59.703 12:48:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:59.703 12:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:06.264 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:06.264 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:06.264 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:06.265 Found net devices under 0000:86:00.0: cvl_0_0 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:06.265 Found net devices under 0000:86:00.1: cvl_0_1 00:25:06.265 12:48:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:06.265 12:48:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:06.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:06.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:25:06.265 00:25:06.265 --- 10.0.0.2 ping statistics --- 00:25:06.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.265 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:06.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:06.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:25:06.265 00:25:06.265 --- 10.0.0.1 ping statistics --- 00:25:06.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.265 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2653418 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2653418 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:06.265 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2653418 ']' 00:25:06.266 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.266 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:06.266 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.266 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:06.266 12:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:06.266 12:48:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e74445d18bea1ec4fa44f109dbef3a38 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.0p0 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e74445d18bea1ec4fa44f109dbef3a38 0 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e74445d18bea1ec4fa44f109dbef3a38 0 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e74445d18bea1ec4fa44f109dbef3a38 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.0p0 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.0p0 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.0p0 
00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f0617a8b20a3428aa26cefcaab4f1b9d8439520c6be28eaea7b46f4dec65b38b 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.fH4 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f0617a8b20a3428aa26cefcaab4f1b9d8439520c6be28eaea7b46f4dec65b38b 3 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f0617a8b20a3428aa26cefcaab4f1b9d8439520c6be28eaea7b46f4dec65b38b 3 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f0617a8b20a3428aa26cefcaab4f1b9d8439520c6be28eaea7b46f4dec65b38b 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.fH4 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.fH4 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.fH4 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5369440778de59758bffae91de15b354fe83c166015f596f 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.h5x 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5369440778de59758bffae91de15b354fe83c166015f596f 0 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5369440778de59758bffae91de15b354fe83c166015f596f 0 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5369440778de59758bffae91de15b354fe83c166015f596f 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.h5x 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.h5x 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.h5x 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=28649180ea11f5e9f312282aa744a6eed52ebe8c6efb2f76 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.skE 00:25:06.266 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 28649180ea11f5e9f312282aa744a6eed52ebe8c6efb2f76 2 00:25:06.267 12:48:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 28649180ea11f5e9f312282aa744a6eed52ebe8c6efb2f76 2 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=28649180ea11f5e9f312282aa744a6eed52ebe8c6efb2f76 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.skE 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.skE 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.skE 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=eced1aec8f4f03eb48a75d06856496ff 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.dy0 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key eced1aec8f4f03eb48a75d06856496ff 1 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 eced1aec8f4f03eb48a75d06856496ff 1 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=eced1aec8f4f03eb48a75d06856496ff 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.dy0 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.dy0 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.dy0 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7480a602de4ed9cab94ed21a12b9d36d 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.VtN 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7480a602de4ed9cab94ed21a12b9d36d 1 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7480a602de4ed9cab94ed21a12b9d36d 1 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7480a602de4ed9cab94ed21a12b9d36d 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.VtN 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.VtN 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.VtN 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:06.267 12:48:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f86748bba05e473553d7b384dc776214cefaa524c13c4215 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.xpB 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f86748bba05e473553d7b384dc776214cefaa524c13c4215 2 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f86748bba05e473553d7b384dc776214cefaa524c13c4215 2 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f86748bba05e473553d7b384dc776214cefaa524c13c4215 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.xpB 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.xpB 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.xpB 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f6623e36f171e6e7eabd99ed702e1a64 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.nx8 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f6623e36f171e6e7eabd99ed702e1a64 0 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f6623e36f171e6e7eabd99ed702e1a64 0 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f6623e36f171e6e7eabd99ed702e1a64 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.nx8 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.nx8 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.nx8 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:06.267 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=790b4b5733a239b330337ecbb0c451df49e911e53e7c19b423cb5bc246fce811 00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.FUg 00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 790b4b5733a239b330337ecbb0c451df49e911e53e7c19b423cb5bc246fce811 3 00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 790b4b5733a239b330337ecbb0c451df49e911e53e7c19b423cb5bc246fce811 3 00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=790b4b5733a239b330337ecbb0c451df49e911e53e7c19b423cb5bc246fce811 00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:06.268 12:48:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.FUg 00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.FUg 00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.FUg 00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2653418 00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2653418 ']' 00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
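[Editor's note] The `format_dhchap_key`/`format_key` step traced above (the `python -` call at `nvmf/common.sh@733`) emits the keys later registered with `keyring_file_add_key`, e.g. `DHHC-1:02:Mjg2...==:`. A minimal sketch, assuming the NVMe-oF DH-HMAC-CHAP secret representation: base64 of the key bytes with a little-endian CRC-32 appended, wrapped as `DHHC-1:<digest-id>:<base64>:`:

```shell
# Hypothetical reconstruction of format_dhchap_key; assumes the ASCII hex
# string itself is the key material and a CRC-32 is appended per the
# NVMe-oF secret representation.
format_dhchap_key() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode("ascii")            # ASCII hex string as key bytes
crc = zlib.crc32(key).to_bytes(4, "little")  # integrity check, little-endian
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
}
```

With the 48-character hex key from the trace and digest id 2, the base64 body begins with the same `Mjg2NDkx...` seen in the log's ckey.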
00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:06.268 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.0p0 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.fH4 ]] 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fH4 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.h5x 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.skE ]] 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.skE 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.dy0 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.VtN ]] 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.VtN 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.xpB 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.nx8 ]] 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.nx8 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.FUg 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.527 12:48:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:06.527 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:06.528 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:06.528 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:06.528 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:06.528 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:06.528 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:06.528 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:06.528 12:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:09.060 Waiting for block devices as requested 00:25:09.319 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:09.319 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:09.319 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:09.578 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:09.578 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:09.578 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:09.578 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:09.838 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:09.838 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:09.838 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:09.838 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:10.097 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:10.097 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:10.097 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:10.355 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:10.355 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:10.355 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:10.922 No valid GPT data, bailing 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:10.922 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:11.180 00:25:11.180 Discovery Log Number of Records 2, Generation counter 2 00:25:11.180 =====Discovery Log Entry 0====== 00:25:11.180 trtype: tcp 00:25:11.180 adrfam: ipv4 00:25:11.180 subtype: current discovery subsystem 00:25:11.180 treq: not specified, sq flow control disable supported 00:25:11.180 portid: 1 00:25:11.180 trsvcid: 4420 00:25:11.180 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:11.180 traddr: 10.0.0.1 00:25:11.180 eflags: none 00:25:11.180 sectype: none 00:25:11.180 =====Discovery Log Entry 1====== 00:25:11.180 trtype: tcp 00:25:11.180 adrfam: ipv4 00:25:11.180 subtype: nvme subsystem 00:25:11.180 treq: not specified, sq flow control disable supported 00:25:11.180 portid: 1 00:25:11.180 trsvcid: 4420 00:25:11.181 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:11.181 traddr: 10.0.0.1 00:25:11.181 eflags: none 00:25:11.181 sectype: none 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: ]] 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.181 nvme0n1 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:11.181 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.439 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.439 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:11.439 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:11.439 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.439 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:11.439 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.439 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: ]] 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.440 nvme0n1 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.440 12:48:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: ]] 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.440 
12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.440 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.698 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.698 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.698 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.698 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.698 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.698 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.698 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.698 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.698 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.698 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.698 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.698 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.698 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:11.698 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.698 12:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.698 nvme0n1 00:25:11.698 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.698 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.698 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.698 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.698 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.698 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.698 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.698 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: ]] 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.699 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:25:11.957 nvme0n1 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: ]] 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.957 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.215 nvme0n1 00:25:12.215 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.215 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.215 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:12.216 12:48:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.216 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.474 nvme0n1 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.474 
12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: ]] 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:12.474 
12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.474 12:48:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.474 12:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.732 nvme0n1 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.732 12:48:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: ]] 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.732 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.733 12:48:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:12.733 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:12.733 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.733 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:12.733 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.733 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.733 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.733 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.733 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:12.733 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:12.733 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:12.733 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.733 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.733 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:12.733 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.733 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:12.733 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:12.733 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:12.733 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:12.733 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.733 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.991 nvme0n1 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.991 12:48:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: ]] 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.991 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.249 nvme0n1 00:25:13.249 12:48:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.249 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.249 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.249 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.249 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.249 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.249 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.249 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.249 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.249 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.249 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.249 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.249 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:13.249 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.249 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.249 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:13.249 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:13.249 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:13.249 12:48:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:13.249 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.249 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:13.249 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:13.249 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: ]] 00:25:13.249 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:13.249 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:13.250 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.250 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.250 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:13.250 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:13.250 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.250 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:13.250 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.250 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.250 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.250 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:25:13.250 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.250 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:13.250 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.250 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.250 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.250 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:13.250 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.250 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:13.250 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.250 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.250 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:13.250 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.250 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.508 nvme0n1 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.508 12:48:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.508 12:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.766 nvme0n1 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: ]] 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.766 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.024 nvme0n1 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: ]] 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:14.024 
12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.024 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.025 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.025 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.025 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.025 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.025 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.025 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.025 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:14.025 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.025 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.283 nvme0n1 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.283 12:48:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: ]] 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.283 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.284 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.284 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.284 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.284 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.284 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:14.284 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.284 12:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.542 nvme0n1 00:25:14.542 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.542 12:48:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.542 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.542 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.542 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.542 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.800 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.800 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.800 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.800 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.800 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.800 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.800 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:14.800 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.800 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.800 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:14.800 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:14.800 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:14.800 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:14.800 
12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.800 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:14.800 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:14.800 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: ]] 00:25:14.800 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:14.800 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:14.800 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.800 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.800 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:14.801 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:14.801 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.801 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:14.801 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.801 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.801 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.801 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.801 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.801 12:48:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.801 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.801 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.801 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.801 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.801 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.801 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.801 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.801 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.801 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:14.801 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.801 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.059 nvme0n1 00:25:15.059 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.059 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.059 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.059 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.059 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.059 12:48:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.059 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.059 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.059 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.059 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.059 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.059 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.059 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:15.059 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.059 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.059 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:15.059 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:15.059 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:15.059 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:15.059 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.060 
12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.060 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.318 nvme0n1 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: ]] 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.318 12:48:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.318 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.319 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.319 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.319 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.319 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:15.319 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.319 12:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.885 nvme0n1 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: ]] 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:15.885 12:48:58 
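The `get_main_ns_ip` fragment repeated throughout the trace (nvmf/common.sh@769-783) maps the transport to the name of the environment variable holding the target address — `rdma` to `NVMF_FIRST_TARGET_IP`, `tcp` to `NVMF_INITIATOR_IP` — then dereferences that name and echoes the result (10.0.0.1 in this run). A simplified, self-contained sketch of the lookup; the IPs and `TEST_TRANSPORT` value below are placeholders, not the fixture's real configuration:

```shell
#!/usr/bin/env bash
# Simplified nvmf/common.sh get_main_ns_ip: map transport -> variable NAME in an
# associative array, then dereference the name with ${!ip}. IPs are placeholders.
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_INITIATOR_IP=10.0.0.1
TEST_TRANSPORT=tcp

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z $ip ]] && return 1       # unknown transport
    [[ -z ${!ip} ]] && return 1    # candidate variable not populated
    echo "${!ip}"
}

get_main_ns_ip   # -> 10.0.0.1 when TEST_TRANSPORT=tcp
```

The trace's `[[ -z tcp ]]` / `[[ -z NVMF_INITIATOR_IP ]]` / `echo 10.0.0.1` lines are exactly these guards and the final dereference with xtrace enabled.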
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.885 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.144 nvme0n1 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: ]] 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.144 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.144 12:48:58 
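Each secret in the trace has the form `DHHC-1:<id>:<base64 payload>:`, where the middle field encodes the key's size/hash class and the base64 payload decodes to the raw key followed by a 4-byte trailing checksum — so the `:01:` (sha256-sized) keys above decode to 36 bytes: a 32-byte key plus the checksum. A quick check using one of the keyid-2 secrets from this run (the checksum interpretation of the last 4 bytes is an assumption from the DHHC-1 convention, not verified here):

```shell
#!/usr/bin/env bash
# Inspect one DHHC-1 secret from the trace above: split out the base64 payload
# and measure its decoded length (expected: 32-byte key + 4-byte checksum).
key='DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3:'

payload=$(echo "$key" | cut -d: -f3)
decoded_len=$(echo "$payload" | base64 -d | wc -c)
echo "$decoded_len"   # 36
```

The `:02:` and `:03:` secrets in the run decode to 52 and 68 bytes respectively (48- and 64-byte keys plus the same 4-byte tail), matching the sha384/sha512 size classes.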
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.402 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.402 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.402 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.402 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.402 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.402 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.402 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.402 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.402 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.402 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.402 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.402 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:16.402 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.402 12:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.660 nvme0n1 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.660 12:48:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.660 12:48:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: ]] 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:16.660 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:16.661 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.661 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:16.661 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.661 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.661 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.661 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.661 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.661 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.661 12:48:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.661 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.661 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.661 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.661 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.661 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.661 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.661 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.661 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:16.661 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.661 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.225 nvme0n1 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.225 12:48:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.225 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.226 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.226 12:48:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.226 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.226 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:17.226 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.226 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.484 nvme0n1 00:25:17.484 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.484 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.484 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.484 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.484 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.484 12:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: ]] 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:17.742 12:49:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.742 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.743 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.743 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.743 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.743 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.743 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.743 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.743 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:17.743 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.743 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.308 nvme0n1 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.308 12:49:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: ]] 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.308 12:49:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:18.308 12:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.308 12:49:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.874 nvme0n1 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: ]] 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.874 12:49:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.874 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.440 nvme0n1 00:25:19.440 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.440 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.440 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.440 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.440 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.698 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.698 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.698 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.698 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.698 12:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: ]] 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.698 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.265 nvme0n1 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.265 
12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:20.265 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.266 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:20.266 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.266 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.266 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.266 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.266 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.266 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.266 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.266 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.266 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.266 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.266 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.266 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.266 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.266 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.266 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:20.266 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.266 12:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.832 nvme0n1 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: ]] 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.832 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:20.833 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.833 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.833 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.833 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.833 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.833 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.833 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.833 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.833 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.833 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.833 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.833 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.833 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.833 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.833 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:20.833 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.833 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.091 nvme0n1 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.091 
12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: ]] 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:21.091 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.092 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:25:21.092 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.092 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.092 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.092 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.092 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.092 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.092 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.092 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.092 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.092 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.092 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.092 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.092 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.092 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.092 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:21.092 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.092 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.350 nvme0n1 
00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:21.350 12:49:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: ]] 00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:21.350 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:21.351 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.351 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.351 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:21.351 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:21.351 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.351 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:21.351 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.351 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.351 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.351 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.351 
12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.351 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.351 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.351 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.351 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.351 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.351 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.351 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.351 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.351 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.351 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:21.351 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.351 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.664 nvme0n1 00:25:21.664 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.664 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.664 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.665 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.665 12:49:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.665 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.665 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.665 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.665 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.665 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.665 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.665 12:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: ]] 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.665 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.956 nvme0n1 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.956 12:49:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:21.956 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.957 nvme0n1 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: ]] 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.957 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.246 nvme0n1 00:25:22.246 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.246 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.246 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.246 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.246 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.246 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.246 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.246 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.246 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.246 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.246 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.246 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.246 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:22.247 
12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: ]] 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.247 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.520 nvme0n1 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 
00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: ]] 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.520 12:49:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.520 12:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.777 nvme0n1 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.777 12:49:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: ]] 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.777 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.035 nvme0n1 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.035 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.294 nvme0n1 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.295 12:49:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: ]] 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.295 12:49:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:23.295 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.295 12:49:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.553 nvme0n1 00:25:23.553 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.553 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.553 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.553 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.553 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.553 12:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: ]] 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:23.553 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.554 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.554 
12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.554 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.554 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.554 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.554 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.554 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.554 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.554 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.554 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.554 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.554 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.554 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.554 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:23.554 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.554 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.811 nvme0n1 00:25:23.811 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.811 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.811 12:49:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.811 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.811 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.811 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.069 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.069 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.069 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.069 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.069 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.069 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.069 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:24.069 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.069 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.069 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.069 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:24.069 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:24.069 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:24.069 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.069 12:49:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:24.069 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:24.069 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: ]] 00:25:24.069 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:24.069 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:24.069 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.069 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.069 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:24.070 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:24.070 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.070 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:24.070 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.070 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.070 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.070 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.070 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.070 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.070 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.070 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.070 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.070 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.070 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.070 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.070 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.070 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.070 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:24.070 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.070 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.328 nvme0n1 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: ]] 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.328 12:49:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.328 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.587 nvme0n1 00:25:24.587 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.587 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.587 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.587 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.587 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.587 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.587 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.587 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.587 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.587 12:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.587 12:49:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:24.587 12:49:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:24.587 
12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.587 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.845 nvme0n1 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.845 12:49:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: ]] 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.845 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.103 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.104 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.104 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.104 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.104 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.104 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.104 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.104 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.104 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.104 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.104 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.104 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.104 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:25.104 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.104 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.362 nvme0n1 
00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:25.362 12:49:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: ]] 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.362 
12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.362 12:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.929 nvme0n1 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.929 12:49:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:25.929 12:49:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: ]] 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.929 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.187 nvme0n1 00:25:26.187 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.187 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.187 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.187 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.187 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.187 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.187 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.187 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.187 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.187 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: ]] 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:26.446 12:49:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.446 12:49:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.446 12:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.705 nvme0n1 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.705 12:49:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:26.705 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.273 nvme0n1 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:27.273 12:49:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: ]] 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.273 12:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.839 nvme0n1 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: ]] 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.839 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:27.840 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:27.840 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:27.840 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.840 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:27.840 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.840 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.840 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.840 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:27.840 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.840 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.840 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.840 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.840 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.840 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.840 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.840 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.840 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.840 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.840 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:27.840 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.840 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.406 nvme0n1 00:25:28.406 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.406 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.406 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.406 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:28.406 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.406 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.406 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.406 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.406 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.406 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.406 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.678 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.678 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:28.678 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.678 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.678 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:28.678 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:28.678 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:28.678 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:28.678 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.678 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:28.678 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:28.678 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: ]] 00:25:28.679 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:28.679 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:28.679 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.679 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.679 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:28.679 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:28.679 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.679 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:28.679 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.679 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.679 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.679 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.679 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.679 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.679 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.679 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.680 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.680 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.680 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.680 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.680 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.680 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.680 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:28.680 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.680 12:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.254 nvme0n1 00:25:29.254 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.254 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.254 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.254 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.254 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.254 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.254 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.254 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:29.254 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.254 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.254 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.254 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: ]] 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.255 12:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.822 nvme0n1 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.822 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:30.389 nvme0n1 00:25:30.389 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.389 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.389 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.389 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.389 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.389 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.389 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.389 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.389 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.389 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.389 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.389 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:30.389 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:30.389 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.389 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:30.389 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.389 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.389 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:25:30.389 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: ]] 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:30.648 12:49:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.648 12:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.648 nvme0n1 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: ]] 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.648 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.908 nvme0n1 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: ]] 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.908 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.167 nvme0n1 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: ]] 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.167 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.426 nvme0n1 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.426 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:31.684 nvme0n1 00:25:31.684 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.684 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.684 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.684 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.684 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.684 12:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:31.684 12:49:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: ]] 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.684 12:49:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.684 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.685 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:31.685 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.685 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.943 nvme0n1 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:31.943 12:49:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: ]] 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.943 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.201 nvme0n1 00:25:32.201 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.201 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.201 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.201 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.201 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.201 
12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.201 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.201 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.201 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.201 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.201 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.201 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.201 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: ]] 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.202 12:49:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.202 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.460 nvme0n1 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.460 12:49:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: ]] 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.460 12:49:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.460 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.719 nvme0n1 00:25:32.719 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.719 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.719 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.719 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.719 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.719 12:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:32.719 12:49:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.719 nvme0n1 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.719 
12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.719 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.976 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.976 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.976 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.976 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: ]] 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.977 
12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.977 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.234 nvme0n1 00:25:33.234 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.235 12:49:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.235 
12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: ]] 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.235 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.493 nvme0n1 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: ]] 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 
00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.493 12:49:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.493 12:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.751 nvme0n1 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.751 12:49:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: ]] 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 
00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.751 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:33.752 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.752 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.752 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.009 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.009 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.009 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.009 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.009 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.009 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.009 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.009 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.009 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.009 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.009 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.009 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:34.009 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.009 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.009 nvme0n1 00:25:34.009 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.009 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.009 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.009 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.009 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.268 12:49:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.268 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.526 nvme0n1 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.526 
12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: ]] 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.526 12:49:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.526 12:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.784 nvme0n1 00:25:34.784 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.784 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.784 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.784 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.784 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.784 12:49:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:35.042 12:49:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: ]] 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.042 12:49:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.042 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.300 nvme0n1 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.300 12:49:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: ]] 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:35.300 12:49:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.300 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.558 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.558 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.558 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.558 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.558 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.558 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.558 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.558 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.558 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.558 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.558 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.558 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:35.558 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.558 12:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.817 nvme0n1 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: ]] 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:35.817 12:49:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.817 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.383 nvme0n1 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.383 
12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.383 12:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.640 nvme0n1 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:36.640 12:49:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc0NDQ1ZDE4YmVhMWVjNGZhNDRmMTA5ZGJlZjNhMzhwbF0K: 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: ]] 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA2MTdhOGIyMGEzNDI4YWEyNmNlZmNhYWI0ZjFiOWQ4NDM5NTIwYzZiZTI4ZWFlYTdiNDZmNGRlYzY1YjM4YgEqzZ0=: 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:36.640 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:36.641 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.641 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:36.641 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.641 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.898 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.898 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.898 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.898 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.898 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:25:36.898 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.898 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.898 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.898 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.898 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.898 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.898 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.898 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:36.898 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.898 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.463 nvme0n1 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: ]] 00:25:37.463 12:49:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.463 12:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.027 nvme0n1 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: ]] 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.027 
12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.027 12:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.590 nvme0n1 00:25:38.591 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.591 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.591 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.591 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.591 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.591 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.591 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.591 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.591 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.591 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.848 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.849 12:49:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjg2NzQ4YmJhMDVlNDczNTUzZDdiMzg0ZGM3NzYyMTRjZWZhYTUyNGMxM2M0MjE1iBbTJA==: 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: ]] 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjY2MjNlMzZmMTcxZTZlN2VhYmQ5OWVkNzAyZTFhNjTxxFgU: 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.849 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:39.412 nvme0n1 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzkwYjRiNTczM2EyMzliMzMwMzM3ZWNiYjBjNDUxZGY0OWU5MTFlNTNlN2MxOWI0MjNjYjViYzI0NmZjZTgxMVbClsc=: 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.412 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.413 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.413 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.413 
12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.413 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.413 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.413 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.413 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.413 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.413 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.413 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.413 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.413 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:39.413 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.413 12:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.977 nvme0n1 00:25:39.977 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.977 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.977 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.977 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.977 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.977 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.977 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.977 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.977 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.977 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.977 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.977 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:39.977 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.977 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.977 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:39.977 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:39.977 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:39.977 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: ]] 00:25:39.978 
12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.978 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.236 request: 00:25:40.236 { 00:25:40.236 "name": "nvme0", 00:25:40.236 "trtype": "tcp", 00:25:40.236 "traddr": "10.0.0.1", 00:25:40.236 "adrfam": "ipv4", 00:25:40.236 "trsvcid": "4420", 00:25:40.236 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:40.236 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:40.236 "prchk_reftag": false, 00:25:40.236 "prchk_guard": false, 00:25:40.236 "hdgst": false, 00:25:40.236 "ddgst": false, 00:25:40.236 "allow_unrecognized_csi": false, 00:25:40.236 "method": "bdev_nvme_attach_controller", 00:25:40.236 "req_id": 1 00:25:40.236 } 00:25:40.236 Got JSON-RPC error response 00:25:40.236 response: 00:25:40.236 { 00:25:40.236 "code": -5, 00:25:40.236 "message": "Input/output 
error" 00:25:40.236 } 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.236 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.236 request: 00:25:40.236 { 00:25:40.236 "name": "nvme0", 00:25:40.236 "trtype": "tcp", 00:25:40.236 "traddr": "10.0.0.1", 
00:25:40.236 "adrfam": "ipv4", 00:25:40.236 "trsvcid": "4420", 00:25:40.237 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:40.237 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:40.237 "prchk_reftag": false, 00:25:40.237 "prchk_guard": false, 00:25:40.237 "hdgst": false, 00:25:40.237 "ddgst": false, 00:25:40.237 "dhchap_key": "key2", 00:25:40.237 "allow_unrecognized_csi": false, 00:25:40.237 "method": "bdev_nvme_attach_controller", 00:25:40.237 "req_id": 1 00:25:40.237 } 00:25:40.237 Got JSON-RPC error response 00:25:40.237 response: 00:25:40.237 { 00:25:40.237 "code": -5, 00:25:40.237 "message": "Input/output error" 00:25:40.237 } 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.237 12:49:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:40.237 12:49:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.237 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.495 request: 00:25:40.495 { 00:25:40.495 "name": "nvme0", 00:25:40.495 "trtype": "tcp", 00:25:40.495 "traddr": "10.0.0.1", 00:25:40.495 "adrfam": "ipv4", 00:25:40.495 "trsvcid": "4420", 00:25:40.495 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:40.495 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:40.495 "prchk_reftag": false, 00:25:40.495 "prchk_guard": false, 00:25:40.495 "hdgst": false, 00:25:40.495 "ddgst": false, 00:25:40.495 "dhchap_key": "key1", 00:25:40.495 "dhchap_ctrlr_key": "ckey2", 00:25:40.495 "allow_unrecognized_csi": false, 00:25:40.495 "method": "bdev_nvme_attach_controller", 00:25:40.495 "req_id": 1 00:25:40.495 } 00:25:40.495 Got JSON-RPC error response 00:25:40.495 response: 00:25:40.495 { 00:25:40.495 "code": -5, 00:25:40.495 "message": "Input/output error" 00:25:40.495 } 00:25:40.495 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:40.495 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:40.495 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:40.495 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:40.495 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:40.495 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:25:40.495 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.495 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.495 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.496 nvme0n1 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.496 12:49:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: ]] 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:40.496 12:49:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.496 12:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.754 12:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.754 12:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.754 12:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:40.754 12:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.754 12:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:40.754 12:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.754 12:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:40.754 12:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.754 12:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.754 12:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.754 12:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.754 request: 00:25:40.754 { 00:25:40.754 "name": "nvme0", 00:25:40.754 "dhchap_key": "key1", 00:25:40.754 "dhchap_ctrlr_key": "ckey2", 00:25:40.754 "method": "bdev_nvme_set_keys", 00:25:40.754 "req_id": 1 00:25:40.754 } 00:25:40.754 Got JSON-RPC error response 00:25:40.754 response: 00:25:40.754 { 00:25:40.754 "code": -13, 00:25:40.754 "message": "Permission denied" 00:25:40.754 } 00:25:40.754 
12:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:40.754 12:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:40.754 12:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:40.754 12:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:40.754 12:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:40.754 12:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.754 12:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.754 12:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:40.754 12:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.755 12:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.755 12:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:40.755 12:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:41.687 12:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.687 12:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:41.687 12:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.687 12:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.687 12:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.687 12:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:41.687 12:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM2OTQ0MDc3OGRlNTk3NThiZmZhZTkxZGUxNWIzNTRmZTgzYzE2NjAxNWY1OTZmBVBxsQ==: 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: ]] 00:25:43.059 12:49:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg2NDkxODBlYTExZjVlOWYzMTIyODJhYTc0NGE2ZWVkNTJlYmU4YzZlZmIyZjc24KSCRQ==: 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.059 nvme0n1 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.059 12:49:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNlZDFhZWM4ZjRmMDNlYjQ4YTc1ZDA2ODU2NDk2ZmZmvwd3: 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: ]] 00:25:43.059 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQ4MGE2MDJkZTRlZDljYWI5NGVkMjFhMTJiOWQzNmTxy2LF: 00:25:43.060 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:43.060 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:43.060 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:43.060 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:43.060 
12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:43.060 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:43.060 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:43.060 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:43.060 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.060 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.060 request: 00:25:43.060 { 00:25:43.060 "name": "nvme0", 00:25:43.060 "dhchap_key": "key2", 00:25:43.060 "dhchap_ctrlr_key": "ckey1", 00:25:43.060 "method": "bdev_nvme_set_keys", 00:25:43.060 "req_id": 1 00:25:43.060 } 00:25:43.060 Got JSON-RPC error response 00:25:43.060 response: 00:25:43.060 { 00:25:43.060 "code": -13, 00:25:43.060 "message": "Permission denied" 00:25:43.060 } 00:25:43.060 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:43.060 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:43.060 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:43.060 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:43.060 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:43.060 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.060 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:43.060 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.060 12:49:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.060 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.060 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:43.060 12:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:44.434 rmmod nvme_tcp 00:25:44.434 rmmod nvme_fabrics 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2653418 ']' 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2653418 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2653418 ']' 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2653418 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2653418 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2653418' 00:25:44.434 killing process with pid 2653418 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2653418 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2653418 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:44.434 12:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.966 12:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:46.966 12:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:46.966 12:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:46.966 12:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:46.966 12:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:46.966 12:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:46.966 12:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:46.966 12:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:46.966 12:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:46.966 12:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:46.966 12:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:46.966 12:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:46.966 12:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:49.496 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:49.496 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:49.496 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:49.496 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:49.496 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:49.496 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:49.496 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:49.496 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:49.496 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:49.496 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:49.496 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:49.496 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:49.496 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:49.496 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:49.496 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:49.496 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:50.062 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:50.331 12:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.0p0 /tmp/spdk.key-null.h5x /tmp/spdk.key-sha256.dy0 /tmp/spdk.key-sha384.xpB /tmp/spdk.key-sha512.FUg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:50.331 12:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:52.866 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:52.866 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:52.866 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:52.866 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:52.866 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:52.866 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:52.866 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:52.866 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:52.866 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:52.866 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:52.866 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:52.866 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:52.866 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:52.866 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:52.866 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:52.866 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:52.866 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:53.124 00:25:53.124 real 0m53.515s 00:25:53.124 user 0m48.336s 00:25:53.124 sys 0m12.203s 00:25:53.124 12:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:53.124 12:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.124 ************************************ 00:25:53.124 END TEST nvmf_auth_host 00:25:53.124 ************************************ 00:25:53.124 12:49:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:25:53.124 12:49:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:53.124 12:49:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:53.124 12:49:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:53.124 12:49:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.124 ************************************ 00:25:53.124 START TEST nvmf_digest 00:25:53.124 ************************************ 00:25:53.124 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:53.124 * Looking for test storage... 00:25:53.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:53.124 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:53.124 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:25:53.124 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:53.124 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:53.124 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:53.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.383 --rc genhtml_branch_coverage=1 00:25:53.383 --rc genhtml_function_coverage=1 00:25:53.383 --rc genhtml_legend=1 00:25:53.383 --rc geninfo_all_blocks=1 00:25:53.383 --rc geninfo_unexecuted_blocks=1 00:25:53.383 00:25:53.383 ' 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:53.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.383 --rc genhtml_branch_coverage=1 00:25:53.383 --rc genhtml_function_coverage=1 00:25:53.383 --rc genhtml_legend=1 00:25:53.383 --rc geninfo_all_blocks=1 00:25:53.383 --rc geninfo_unexecuted_blocks=1 00:25:53.383 00:25:53.383 ' 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:53.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.383 --rc genhtml_branch_coverage=1 00:25:53.383 --rc genhtml_function_coverage=1 00:25:53.383 --rc genhtml_legend=1 00:25:53.383 --rc geninfo_all_blocks=1 00:25:53.383 --rc geninfo_unexecuted_blocks=1 00:25:53.383 00:25:53.383 ' 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:53.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.383 --rc genhtml_branch_coverage=1 00:25:53.383 --rc genhtml_function_coverage=1 00:25:53.383 --rc genhtml_legend=1 00:25:53.383 --rc geninfo_all_blocks=1 00:25:53.383 --rc geninfo_unexecuted_blocks=1 00:25:53.383 00:25:53.383 ' 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.383 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:53.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:53.384 12:49:35 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:25:53.384 12:49:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:59.952 12:49:41 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:59.952 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:59.952 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:59.953 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:59.953 Found net devices under 0000:86:00.0: cvl_0_0 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:59.953 Found net devices under 0000:86:00.1: cvl_0_1 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:59.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:59.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:25:59.953 00:25:59.953 --- 10.0.0.2 ping statistics --- 00:25:59.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.953 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:59.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:59.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:25:59.953 00:25:59.953 --- 10.0.0.1 ping statistics --- 00:25:59.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.953 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:59.953 ************************************ 00:25:59.953 START TEST nvmf_digest_clean 00:25:59.953 ************************************ 00:25:59.953 
12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:59.953 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2667191 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2667191 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2667191 ']' 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:59.954 12:49:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:59.954 [2024-11-28 12:49:41.579003] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:25:59.954 [2024-11-28 12:49:41.579046] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.954 [2024-11-28 12:49:41.646222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.954 [2024-11-28 12:49:41.684878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:59.954 [2024-11-28 12:49:41.684913] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.954 [2024-11-28 12:49:41.684920] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:59.954 [2024-11-28 12:49:41.684927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:59.954 [2024-11-28 12:49:41.684932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:59.954 [2024-11-28 12:49:41.685487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:59.954 null0 00:25:59.954 [2024-11-28 12:49:41.851397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:59.954 [2024-11-28 12:49:41.875601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2667211 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2667211 /var/tmp/bperf.sock 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2667211 ']' 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:59.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:59.954 12:49:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:59.954 [2024-11-28 12:49:41.929790] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:25:59.954 [2024-11-28 12:49:41.929830] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667211 ] 00:25:59.954 [2024-11-28 12:49:41.992143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.954 [2024-11-28 12:49:42.034984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.954 12:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:59.954 12:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:59.954 12:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:59.954 12:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:59.954 12:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:59.954 12:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:59.954 12:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:00.213 nvme0n1 00:26:00.213 12:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:00.213 12:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:00.472 Running I/O for 2 seconds... 00:26:02.345 24611.00 IOPS, 96.14 MiB/s [2024-11-28T11:49:44.864Z] 24637.50 IOPS, 96.24 MiB/s 00:26:02.346 Latency(us) 00:26:02.346 [2024-11-28T11:49:44.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.346 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:02.346 nvme0n1 : 2.00 24655.22 96.31 0.00 0.00 5186.62 2592.95 11568.53 00:26:02.346 [2024-11-28T11:49:44.865Z] =================================================================================================================== 00:26:02.346 [2024-11-28T11:49:44.865Z] Total : 24655.22 96.31 0.00 0.00 5186.62 2592.95 11568.53 00:26:02.346 { 00:26:02.346 "results": [ 00:26:02.346 { 00:26:02.346 "job": "nvme0n1", 00:26:02.346 "core_mask": "0x2", 00:26:02.346 "workload": "randread", 00:26:02.346 "status": "finished", 00:26:02.346 "queue_depth": 128, 00:26:02.346 "io_size": 4096, 00:26:02.346 "runtime": 2.003754, 00:26:02.346 "iops": 24655.22214802815, 00:26:02.346 "mibps": 96.30946151573497, 00:26:02.346 "io_failed": 0, 00:26:02.346 "io_timeout": 0, 00:26:02.346 "avg_latency_us": 5186.6236559124645, 00:26:02.346 "min_latency_us": 2592.946086956522, 00:26:02.346 "max_latency_us": 11568.528695652174 00:26:02.346 } 00:26:02.346 ], 00:26:02.346 "core_count": 1 00:26:02.346 } 00:26:02.346 12:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:02.346 12:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:26:02.346 12:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:02.346 12:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:02.346 | select(.opcode=="crc32c") 00:26:02.346 | "\(.module_name) \(.executed)"' 00:26:02.346 12:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:02.604 12:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:02.604 12:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:02.604 12:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:02.604 12:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:02.604 12:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2667211 00:26:02.604 12:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2667211 ']' 00:26:02.604 12:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2667211 00:26:02.604 12:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:02.604 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:02.604 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2667211 00:26:02.604 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:02.604 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:02.604 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2667211' 00:26:02.604 killing process with pid 2667211 00:26:02.604 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2667211 00:26:02.604 Received shutdown signal, test time was about 2.000000 seconds 00:26:02.604 00:26:02.604 Latency(us) 00:26:02.604 [2024-11-28T11:49:45.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.604 [2024-11-28T11:49:45.123Z] =================================================================================================================== 00:26:02.604 [2024-11-28T11:49:45.123Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:02.604 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2667211 00:26:02.862 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:02.862 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:02.862 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:02.862 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:02.862 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:02.862 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:02.862 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:02.862 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2667691 00:26:02.862 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 2667691 /var/tmp/bperf.sock 00:26:02.862 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:02.862 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2667691 ']' 00:26:02.862 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:02.862 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:02.862 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:02.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:02.862 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:02.862 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:02.862 [2024-11-28 12:49:45.260059] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:26:02.862 [2024-11-28 12:49:45.260110] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667691 ] 00:26:02.862 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:02.862 Zero copy mechanism will not be used. 
00:26:02.862 [2024-11-28 12:49:45.323696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.862 [2024-11-28 12:49:45.363764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.119 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:03.119 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:03.119 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:03.119 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:03.120 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:03.377 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.377 12:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.635 nvme0n1 00:26:03.635 12:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:03.635 12:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:03.635 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:03.635 Zero copy mechanism will not be used. 00:26:03.635 Running I/O for 2 seconds... 
00:26:05.949 5370.00 IOPS, 671.25 MiB/s [2024-11-28T11:49:48.468Z] 5405.00 IOPS, 675.62 MiB/s 00:26:05.949 Latency(us) 00:26:05.949 [2024-11-28T11:49:48.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.949 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:05.949 nvme0n1 : 2.04 5300.88 662.61 0.00 0.00 2963.83 666.05 45590.26 00:26:05.950 [2024-11-28T11:49:48.469Z] =================================================================================================================== 00:26:05.950 [2024-11-28T11:49:48.469Z] Total : 5300.88 662.61 0.00 0.00 2963.83 666.05 45590.26 00:26:05.950 { 00:26:05.950 "results": [ 00:26:05.950 { 00:26:05.950 "job": "nvme0n1", 00:26:05.950 "core_mask": "0x2", 00:26:05.950 "workload": "randread", 00:26:05.950 "status": "finished", 00:26:05.950 "queue_depth": 16, 00:26:05.950 "io_size": 131072, 00:26:05.950 "runtime": 2.042303, 00:26:05.950 "iops": 5300.878469061643, 00:26:05.950 "mibps": 662.6098086327054, 00:26:05.950 "io_failed": 0, 00:26:05.950 "io_timeout": 0, 00:26:05.950 "avg_latency_us": 2963.8311735837233, 00:26:05.950 "min_latency_us": 666.0452173913044, 00:26:05.950 "max_latency_us": 45590.260869565216 00:26:05.950 } 00:26:05.950 ], 00:26:05.950 "core_count": 1 00:26:05.950 } 00:26:05.950 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:05.950 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:05.950 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:05.950 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:05.950 | select(.opcode=="crc32c") 00:26:05.950 | "\(.module_name) \(.executed)"' 00:26:05.950 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:05.950 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:05.950 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:05.950 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:05.950 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:05.950 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2667691 00:26:05.950 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2667691 ']' 00:26:05.950 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2667691 00:26:05.950 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:05.950 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:05.950 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2667691 00:26:05.950 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:05.950 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:05.950 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2667691' 00:26:05.950 killing process with pid 2667691 00:26:05.950 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2667691 00:26:05.950 Received shutdown signal, test time was about 2.000000 seconds 
00:26:05.950 00:26:05.950 Latency(us) 00:26:05.950 [2024-11-28T11:49:48.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.950 [2024-11-28T11:49:48.469Z] =================================================================================================================== 00:26:05.950 [2024-11-28T11:49:48.469Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:05.950 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2667691 00:26:06.209 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:06.209 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:06.209 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:06.209 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:06.209 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:06.209 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:06.209 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:06.209 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2668319 00:26:06.209 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2668319 /var/tmp/bperf.sock 00:26:06.210 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:06.210 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2668319 ']' 00:26:06.210 12:49:48 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:06.210 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:06.210 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:06.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:06.210 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:06.210 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:06.210 [2024-11-28 12:49:48.671499] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:26:06.210 [2024-11-28 12:49:48.671551] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668319 ] 00:26:06.469 [2024-11-28 12:49:48.734328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.469 [2024-11-28 12:49:48.776934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.469 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:06.469 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:06.469 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:06.469 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:06.469 12:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:06.728 12:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:06.728 12:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:06.986 nvme0n1 00:26:06.986 12:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:06.986 12:49:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:06.986 Running I/O for 2 seconds... 
00:26:09.300 27288.00 IOPS, 106.59 MiB/s [2024-11-28T11:49:51.819Z] 27545.50 IOPS, 107.60 MiB/s 00:26:09.300 Latency(us) 00:26:09.300 [2024-11-28T11:49:51.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.300 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:09.300 nvme0n1 : 2.01 27541.57 107.58 0.00 0.00 4641.74 1837.86 10542.75 00:26:09.300 [2024-11-28T11:49:51.819Z] =================================================================================================================== 00:26:09.300 [2024-11-28T11:49:51.819Z] Total : 27541.57 107.58 0.00 0.00 4641.74 1837.86 10542.75 00:26:09.300 { 00:26:09.300 "results": [ 00:26:09.300 { 00:26:09.300 "job": "nvme0n1", 00:26:09.300 "core_mask": "0x2", 00:26:09.300 "workload": "randwrite", 00:26:09.300 "status": "finished", 00:26:09.300 "queue_depth": 128, 00:26:09.300 "io_size": 4096, 00:26:09.300 "runtime": 2.007257, 00:26:09.300 "iops": 27541.565429837832, 00:26:09.300 "mibps": 107.58423996030403, 00:26:09.300 "io_failed": 0, 00:26:09.300 "io_timeout": 0, 00:26:09.300 "avg_latency_us": 4641.743118137583, 00:26:09.300 "min_latency_us": 1837.8573913043479, 00:26:09.300 "max_latency_us": 10542.747826086956 00:26:09.300 } 00:26:09.300 ], 00:26:09.300 "core_count": 1 00:26:09.300 } 00:26:09.300 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:09.300 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:09.300 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:09.300 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:09.300 | select(.opcode=="crc32c") 00:26:09.300 | "\(.module_name) \(.executed)"' 00:26:09.300 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:09.300 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:09.300 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:09.300 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:09.300 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:09.300 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2668319 00:26:09.300 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2668319 ']' 00:26:09.300 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2668319 00:26:09.300 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:09.300 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:09.300 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2668319 00:26:09.300 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:09.300 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:09.300 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2668319' 00:26:09.300 killing process with pid 2668319 00:26:09.300 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2668319 00:26:09.300 Received shutdown signal, test time was about 2.000000 seconds 
00:26:09.300 00:26:09.300 Latency(us) 00:26:09.300 [2024-11-28T11:49:51.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.300 [2024-11-28T11:49:51.819Z] =================================================================================================================== 00:26:09.300 [2024-11-28T11:49:51.819Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:09.300 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2668319 00:26:09.559 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:09.559 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:09.559 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:09.559 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:09.559 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:09.559 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:09.559 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:09.559 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2668847 00:26:09.559 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2668847 /var/tmp/bperf.sock 00:26:09.559 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:09.559 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2668847 ']' 00:26:09.560 12:49:51 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:09.560 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:09.560 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:09.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:09.560 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:09.560 12:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:09.560 [2024-11-28 12:49:51.966709] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:26:09.560 [2024-11-28 12:49:51.966759] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668847 ] 00:26:09.560 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:09.560 Zero copy mechanism will not be used. 
00:26:09.560 [2024-11-28 12:49:52.028706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.560 [2024-11-28 12:49:52.071264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.819 12:49:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:09.819 12:49:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:09.819 12:49:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:09.819 12:49:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:09.819 12:49:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:10.078 12:49:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.078 12:49:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.337 nvme0n1 00:26:10.337 12:49:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:10.337 12:49:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:10.337 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:10.337 Zero copy mechanism will not be used. 00:26:10.337 Running I/O for 2 seconds... 
00:26:12.211 5937.00 IOPS, 742.12 MiB/s [2024-11-28T11:49:54.730Z] 6394.50 IOPS, 799.31 MiB/s 00:26:12.211 Latency(us) 00:26:12.211 [2024-11-28T11:49:54.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.211 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:12.211 nvme0n1 : 2.00 6392.91 799.11 0.00 0.00 2498.56 1745.25 8149.26 00:26:12.211 [2024-11-28T11:49:54.730Z] =================================================================================================================== 00:26:12.211 [2024-11-28T11:49:54.730Z] Total : 6392.91 799.11 0.00 0.00 2498.56 1745.25 8149.26 00:26:12.211 { 00:26:12.211 "results": [ 00:26:12.211 { 00:26:12.211 "job": "nvme0n1", 00:26:12.211 "core_mask": "0x2", 00:26:12.211 "workload": "randwrite", 00:26:12.211 "status": "finished", 00:26:12.211 "queue_depth": 16, 00:26:12.211 "io_size": 131072, 00:26:12.211 "runtime": 2.003, 00:26:12.211 "iops": 6392.910634048927, 00:26:12.211 "mibps": 799.1138292561159, 00:26:12.211 "io_failed": 0, 00:26:12.211 "io_timeout": 0, 00:26:12.211 "avg_latency_us": 2498.5626424460556, 00:26:12.211 "min_latency_us": 1745.2521739130434, 00:26:12.211 "max_latency_us": 8149.2591304347825 00:26:12.211 } 00:26:12.211 ], 00:26:12.211 "core_count": 1 00:26:12.211 } 00:26:12.471 12:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:12.471 12:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:12.471 12:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:12.471 12:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:12.471 | select(.opcode=="crc32c") 00:26:12.471 | "\(.module_name) \(.executed)"' 00:26:12.471 12:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:12.471 12:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:12.471 12:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:12.471 12:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:12.471 12:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:12.471 12:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2668847 00:26:12.471 12:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2668847 ']' 00:26:12.471 12:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2668847 00:26:12.471 12:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:12.471 12:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:12.471 12:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2668847 00:26:12.738 12:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:12.738 12:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:12.738 12:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2668847' 00:26:12.738 killing process with pid 2668847 00:26:12.738 12:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2668847 00:26:12.738 Received shutdown signal, test time was about 2.000000 seconds 
00:26:12.738 00:26:12.738 Latency(us) 00:26:12.738 [2024-11-28T11:49:55.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.738 [2024-11-28T11:49:55.257Z] =================================================================================================================== 00:26:12.738 [2024-11-28T11:49:55.257Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:12.738 12:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2668847 00:26:12.738 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2667191 00:26:12.738 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2667191 ']' 00:26:12.738 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2667191 00:26:12.738 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:12.738 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:12.738 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2667191 00:26:12.738 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:12.738 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:12.738 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2667191' 00:26:12.738 killing process with pid 2667191 00:26:12.738 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2667191 00:26:12.738 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2667191 00:26:12.996 00:26:12.996 
real 0m13.852s 00:26:12.996 user 0m26.504s 00:26:12.996 sys 0m4.509s 00:26:12.996 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:12.996 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:12.996 ************************************ 00:26:12.996 END TEST nvmf_digest_clean 00:26:12.996 ************************************ 00:26:12.996 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:12.996 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:12.996 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:12.996 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:12.996 ************************************ 00:26:12.996 START TEST nvmf_digest_error 00:26:12.996 ************************************ 00:26:12.996 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:26:12.996 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:12.996 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:12.996 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:12.997 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:12.997 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2669344 00:26:12.997 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2669344 00:26:12.997 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:12.997 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2669344 ']' 00:26:12.997 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.997 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:12.997 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.997 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:12.997 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:12.997 [2024-11-28 12:49:55.500488] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:26:12.997 [2024-11-28 12:49:55.500536] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.256 [2024-11-28 12:49:55.570184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.256 [2024-11-28 12:49:55.612262] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.256 [2024-11-28 12:49:55.612300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:13.256 [2024-11-28 12:49:55.612308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.256 [2024-11-28 12:49:55.612314] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.256 [2024-11-28 12:49:55.612319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:13.256 [2024-11-28 12:49:55.612885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.256 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:13.256 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:13.256 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:13.256 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:13.256 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.256 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.256 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:13.256 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.256 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.256 [2024-11-28 12:49:55.701392] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:13.256 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.256 12:49:55 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:13.256 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:13.256 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.256 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.515 null0 00:26:13.515 [2024-11-28 12:49:55.799653] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.516 [2024-11-28 12:49:55.823844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.516 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.516 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:13.516 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:13.516 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:13.516 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:13.516 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:13.516 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2669544 00:26:13.516 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2669544 /var/tmp/bperf.sock 00:26:13.516 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:13.516 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2669544 ']' 
00:26:13.516 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:13.516 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:13.516 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:13.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:13.516 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:13.516 12:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.516 [2024-11-28 12:49:55.878108] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:26:13.516 [2024-11-28 12:49:55.878153] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669544 ] 00:26:13.516 [2024-11-28 12:49:55.939550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.516 [2024-11-28 12:49:55.982987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.774 12:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:13.774 12:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:13.774 12:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:13.774 12:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:13.774 12:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:13.774 12:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.774 12:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:14.032 12:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.032 12:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:14.032 12:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:14.032 nvme0n1 00:26:14.291 12:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:14.291 12:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.291 12:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:14.291 12:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.291 12:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:14.291 12:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:14.291 Running I/O for 2 seconds... 00:26:14.291 [2024-11-28 12:49:56.671033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:14.291 [2024-11-28 12:49:56.671066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.291 [2024-11-28 12:49:56.671077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.291 [2024-11-28 12:49:56.680088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:14.291 [2024-11-28 12:49:56.680113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.291 [2024-11-28 12:49:56.680122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.291 [2024-11-28 12:49:56.692864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:14.291 [2024-11-28 12:49:56.692886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.291 [2024-11-28 12:49:56.692895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.291 [2024-11-28 12:49:56.705646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:14.291 [2024-11-28 12:49:56.705668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19805 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.291 [2024-11-28 12:49:56.705676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.291 [2024-11-28 12:49:56.714391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.291 [2024-11-28 12:49:56.714412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.291 [2024-11-28 12:49:56.714422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.291 [2024-11-28 12:49:56.726682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.291 [2024-11-28 12:49:56.726704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.291 [2024-11-28 12:49:56.726713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.291 [2024-11-28 12:49:56.738786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.291 [2024-11-28 12:49:56.738807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.291 [2024-11-28 12:49:56.738815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.291 [2024-11-28 12:49:56.748234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.291 [2024-11-28 12:49:56.748254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.291 [2024-11-28 12:49:56.748263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.291 [2024-11-28 12:49:56.756832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.291 [2024-11-28 12:49:56.756852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.291 [2024-11-28 12:49:56.756860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.291 [2024-11-28 12:49:56.769540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.291 [2024-11-28 12:49:56.769564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.291 [2024-11-28 12:49:56.769572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.291 [2024-11-28 12:49:56.781766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.291 [2024-11-28 12:49:56.781787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.291 [2024-11-28 12:49:56.781796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.291 [2024-11-28 12:49:56.793115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.291 [2024-11-28 12:49:56.793135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.291 [2024-11-28 12:49:56.793144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.291 [2024-11-28 12:49:56.801952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.291 [2024-11-28 12:49:56.801973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.291 [2024-11-28 12:49:56.801981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.551 [2024-11-28 12:49:56.815387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.551 [2024-11-28 12:49:56.815408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.551 [2024-11-28 12:49:56.815417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.551 [2024-11-28 12:49:56.827333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.551 [2024-11-28 12:49:56.827354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.551 [2024-11-28 12:49:56.827363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.551 [2024-11-28 12:49:56.836143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.551 [2024-11-28 12:49:56.836163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.551 [2024-11-28 12:49:56.836171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.551 [2024-11-28 12:49:56.849489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.551 [2024-11-28 12:49:56.849509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.551 [2024-11-28 12:49:56.849521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.551 [2024-11-28 12:49:56.861978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.551 [2024-11-28 12:49:56.861999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.551 [2024-11-28 12:49:56.862007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.551 [2024-11-28 12:49:56.873105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.551 [2024-11-28 12:49:56.873125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.551 [2024-11-28 12:49:56.873134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.551 [2024-11-28 12:49:56.881365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.551 [2024-11-28 12:49:56.881386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.551 [2024-11-28 12:49:56.881395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.551 [2024-11-28 12:49:56.893600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.551 [2024-11-28 12:49:56.893621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.551 [2024-11-28 12:49:56.893630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.551 [2024-11-28 12:49:56.902574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.551 [2024-11-28 12:49:56.902594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.551 [2024-11-28 12:49:56.902603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.551 [2024-11-28 12:49:56.912459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.551 [2024-11-28 12:49:56.912480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.551 [2024-11-28 12:49:56.912488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.551 [2024-11-28 12:49:56.922983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.551 [2024-11-28 12:49:56.923005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.551 [2024-11-28 12:49:56.923013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.551 [2024-11-28 12:49:56.931242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.551 [2024-11-28 12:49:56.931265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.551 [2024-11-28 12:49:56.931274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.551 [2024-11-28 12:49:56.942384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.551 [2024-11-28 12:49:56.942411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.551 [2024-11-28 12:49:56.942420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.551 [2024-11-28 12:49:56.953989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.551 [2024-11-28 12:49:56.954013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.551 [2024-11-28 12:49:56.954023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.551 [2024-11-28 12:49:56.963294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.551 [2024-11-28 12:49:56.963317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.551 [2024-11-28 12:49:56.963327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.551 [2024-11-28 12:49:56.973138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.551 [2024-11-28 12:49:56.973160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.552 [2024-11-28 12:49:56.973168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.552 [2024-11-28 12:49:56.982254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.552 [2024-11-28 12:49:56.982275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.552 [2024-11-28 12:49:56.982283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.552 [2024-11-28 12:49:56.993465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.552 [2024-11-28 12:49:56.993496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.552 [2024-11-28 12:49:56.993505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.552 [2024-11-28 12:49:57.004342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.552 [2024-11-28 12:49:57.004362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.552 [2024-11-28 12:49:57.004371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.552 [2024-11-28 12:49:57.012829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.552 [2024-11-28 12:49:57.012851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.552 [2024-11-28 12:49:57.012859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.552 [2024-11-28 12:49:57.024687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.552 [2024-11-28 12:49:57.024710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.552 [2024-11-28 12:49:57.024719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.552 [2024-11-28 12:49:57.034881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.552 [2024-11-28 12:49:57.034903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.552 [2024-11-28 12:49:57.034911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.552 [2024-11-28 12:49:57.045743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.552 [2024-11-28 12:49:57.045764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.552 [2024-11-28 12:49:57.045773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.552 [2024-11-28 12:49:57.054606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.552 [2024-11-28 12:49:57.054627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.552 [2024-11-28 12:49:57.054636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.552 [2024-11-28 12:49:57.063974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.552 [2024-11-28 12:49:57.063995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.552 [2024-11-28 12:49:57.064004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.811 [2024-11-28 12:49:57.074016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.811 [2024-11-28 12:49:57.074038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.811 [2024-11-28 12:49:57.074045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.811 [2024-11-28 12:49:57.084381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.811 [2024-11-28 12:49:57.084403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.811 [2024-11-28 12:49:57.084411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.811 [2024-11-28 12:49:57.092661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.811 [2024-11-28 12:49:57.092683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.811 [2024-11-28 12:49:57.092692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.811 [2024-11-28 12:49:57.105550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.811 [2024-11-28 12:49:57.105572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.811 [2024-11-28 12:49:57.105580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.811 [2024-11-28 12:49:57.113657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.811 [2024-11-28 12:49:57.113678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.811 [2024-11-28 12:49:57.113694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.811 [2024-11-28 12:49:57.123473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.811 [2024-11-28 12:49:57.123495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.811 [2024-11-28 12:49:57.123504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.811 [2024-11-28 12:49:57.133164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.811 [2024-11-28 12:49:57.133185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.811 [2024-11-28 12:49:57.133193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.811 [2024-11-28 12:49:57.144001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.811 [2024-11-28 12:49:57.144022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.811 [2024-11-28 12:49:57.144030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.811 [2024-11-28 12:49:57.152367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.811 [2024-11-28 12:49:57.152387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.811 [2024-11-28 12:49:57.152395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.811 [2024-11-28 12:49:57.162888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.812 [2024-11-28 12:49:57.162909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.812 [2024-11-28 12:49:57.162917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.812 [2024-11-28 12:49:57.174729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.812 [2024-11-28 12:49:57.174749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.812 [2024-11-28 12:49:57.174757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.812 [2024-11-28 12:49:57.183903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.812 [2024-11-28 12:49:57.183926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.812 [2024-11-28 12:49:57.183935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.812 [2024-11-28 12:49:57.196416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.812 [2024-11-28 12:49:57.196439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.812 [2024-11-28 12:49:57.196448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.812 [2024-11-28 12:49:57.205097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.812 [2024-11-28 12:49:57.205123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.812 [2024-11-28 12:49:57.205132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.812 [2024-11-28 12:49:57.217073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.812 [2024-11-28 12:49:57.217095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.812 [2024-11-28 12:49:57.217103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.812 [2024-11-28 12:49:57.227630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.812 [2024-11-28 12:49:57.227651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.812 [2024-11-28 12:49:57.227660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.812 [2024-11-28 12:49:57.240484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.812 [2024-11-28 12:49:57.240505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.812 [2024-11-28 12:49:57.240514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.812 [2024-11-28 12:49:57.253075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.812 [2024-11-28 12:49:57.253097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.812 [2024-11-28 12:49:57.253106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.812 [2024-11-28 12:49:57.261416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.812 [2024-11-28 12:49:57.261437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.812 [2024-11-28 12:49:57.261445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.812 [2024-11-28 12:49:57.273546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.812 [2024-11-28 12:49:57.273567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.812 [2024-11-28 12:49:57.273575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.812 [2024-11-28 12:49:57.284910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.812 [2024-11-28 12:49:57.284931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.812 [2024-11-28 12:49:57.284939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.812 [2024-11-28 12:49:57.292943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.812 [2024-11-28 12:49:57.292969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.812 [2024-11-28 12:49:57.292978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.812 [2024-11-28 12:49:57.305293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.812 [2024-11-28 12:49:57.305314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.812 [2024-11-28 12:49:57.305322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.812 [2024-11-28 12:49:57.317938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:14.812 [2024-11-28 12:49:57.317967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.812 [2024-11-28 12:49:57.317976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.071 [2024-11-28 12:49:57.331121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:15.071 [2024-11-28 12:49:57.331143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.071 [2024-11-28 12:49:57.331151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.071 [2024-11-28 12:49:57.343871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:15.071 [2024-11-28 12:49:57.343892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.071 [2024-11-28 12:49:57.343900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.071 [2024-11-28 12:49:57.352101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:15.071 [2024-11-28 12:49:57.352121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.071 [2024-11-28 12:49:57.352129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.071 [2024-11-28 12:49:57.363863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:15.071 [2024-11-28 12:49:57.363883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.071 [2024-11-28 12:49:57.363892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.071 [2024-11-28 12:49:57.374089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:15.071 [2024-11-28 12:49:57.374118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.071 [2024-11-28 12:49:57.374127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.071 [2024-11-28 12:49:57.386178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:15.071 [2024-11-28 12:49:57.386198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.071 [2024-11-28 12:49:57.386206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.071 [2024-11-28 12:49:57.395021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:15.071 [2024-11-28 12:49:57.395041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.071 [2024-11-28 12:49:57.395052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.071 [2024-11-28 12:49:57.407834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:15.071 [2024-11-28 12:49:57.407854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.071 [2024-11-28 12:49:57.407862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.071 [2024-11-28 12:49:57.418578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:15.071 [2024-11-28 12:49:57.418597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.071 [2024-11-28 12:49:57.418605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.071 [2024-11-28 12:49:57.427139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:15.071 [2024-11-28 12:49:57.427159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.071 [2024-11-28 12:49:57.427167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.071 [2024-11-28 12:49:57.437930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:15.071 [2024-11-28 12:49:57.437957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.071 [2024-11-28 12:49:57.437967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.071 [2024-11-28 12:49:57.447103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:15.071 [2024-11-28 12:49:57.447124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.071 [2024-11-28 12:49:57.447132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.071 [2024-11-28 12:49:57.457542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:15.071 [2024-11-28 12:49:57.457563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.071 [2024-11-28 12:49:57.457572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.071 [2024-11-28 12:49:57.466749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:15.071 [2024-11-28 12:49:57.466770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.071 [2024-11-28 12:49:57.466778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.071 [2024-11-28 12:49:57.475848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:15.071 [2024-11-28 12:49:57.475868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.072 [2024-11-28 12:49:57.475876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.072 [2024-11-28 12:49:57.485723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x9e66b0) 00:26:15.072 [2024-11-28 12:49:57.485743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.072 [2024-11-28 12:49:57.485752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.072 [2024-11-28 12:49:57.495604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.072 [2024-11-28 12:49:57.495624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.072 [2024-11-28 12:49:57.495632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.072 [2024-11-28 12:49:57.505067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.072 [2024-11-28 12:49:57.505088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.072 [2024-11-28 12:49:57.505096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.072 [2024-11-28 12:49:57.513859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.072 [2024-11-28 12:49:57.513880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.072 [2024-11-28 12:49:57.513889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.072 [2024-11-28 12:49:57.525011] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.072 [2024-11-28 12:49:57.525031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.072 [2024-11-28 12:49:57.525040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.072 [2024-11-28 12:49:57.537580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.072 [2024-11-28 12:49:57.537601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.072 [2024-11-28 12:49:57.537609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.072 [2024-11-28 12:49:57.546428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.072 [2024-11-28 12:49:57.546449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.072 [2024-11-28 12:49:57.546457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.072 [2024-11-28 12:49:57.559588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.072 [2024-11-28 12:49:57.559609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.072 [2024-11-28 12:49:57.559618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:15.072 [2024-11-28 12:49:57.572201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.072 [2024-11-28 12:49:57.572222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.072 [2024-11-28 12:49:57.572235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.072 [2024-11-28 12:49:57.584833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.072 [2024-11-28 12:49:57.584854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.072 [2024-11-28 12:49:57.584862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.331 [2024-11-28 12:49:57.593611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.331 [2024-11-28 12:49:57.593632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.331 [2024-11-28 12:49:57.593639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.331 [2024-11-28 12:49:57.604953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.331 [2024-11-28 12:49:57.604990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.331 [2024-11-28 12:49:57.604999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.331 [2024-11-28 12:49:57.618693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.331 [2024-11-28 12:49:57.618714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.331 [2024-11-28 12:49:57.618722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.331 [2024-11-28 12:49:57.629609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.331 [2024-11-28 12:49:57.629630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.331 [2024-11-28 12:49:57.629638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.331 [2024-11-28 12:49:57.639905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.331 [2024-11-28 12:49:57.639925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.331 [2024-11-28 12:49:57.639933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.331 [2024-11-28 12:49:57.648932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.331 [2024-11-28 12:49:57.648959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.331 [2024-11-28 12:49:57.648968] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.331 23882.00 IOPS, 93.29 MiB/s [2024-11-28T11:49:57.850Z] [2024-11-28 12:49:57.659082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.331 [2024-11-28 12:49:57.659103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.331 [2024-11-28 12:49:57.659112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.331 [2024-11-28 12:49:57.669248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.331 [2024-11-28 12:49:57.669272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.331 [2024-11-28 12:49:57.669281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.331 [2024-11-28 12:49:57.681563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.331 [2024-11-28 12:49:57.681585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.331 [2024-11-28 12:49:57.681593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.331 [2024-11-28 12:49:57.694492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.331 [2024-11-28 12:49:57.694515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:104 nsid:1 lba:3686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.331 [2024-11-28 12:49:57.694524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.331 [2024-11-28 12:49:57.703520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.331 [2024-11-28 12:49:57.703542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.331 [2024-11-28 12:49:57.703551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.332 [2024-11-28 12:49:57.713884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.332 [2024-11-28 12:49:57.713905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.332 [2024-11-28 12:49:57.713914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.332 [2024-11-28 12:49:57.724338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.332 [2024-11-28 12:49:57.724357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.332 [2024-11-28 12:49:57.724365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.332 [2024-11-28 12:49:57.733078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.332 [2024-11-28 12:49:57.733099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.332 [2024-11-28 12:49:57.733108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.332 [2024-11-28 12:49:57.743850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.332 [2024-11-28 12:49:57.743871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.332 [2024-11-28 12:49:57.743879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.332 [2024-11-28 12:49:57.756753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.332 [2024-11-28 12:49:57.756775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.332 [2024-11-28 12:49:57.756783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.332 [2024-11-28 12:49:57.769733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.332 [2024-11-28 12:49:57.769753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.332 [2024-11-28 12:49:57.769762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.332 [2024-11-28 12:49:57.782569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 
00:26:15.332 [2024-11-28 12:49:57.782590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.332 [2024-11-28 12:49:57.782598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.332 [2024-11-28 12:49:57.794486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.332 [2024-11-28 12:49:57.794508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.332 [2024-11-28 12:49:57.794517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.332 [2024-11-28 12:49:57.806287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.332 [2024-11-28 12:49:57.806307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.332 [2024-11-28 12:49:57.806315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.332 [2024-11-28 12:49:57.815717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.332 [2024-11-28 12:49:57.815738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.332 [2024-11-28 12:49:57.815746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.332 [2024-11-28 12:49:57.827397] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.332 [2024-11-28 12:49:57.827417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.332 [2024-11-28 12:49:57.827426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.332 [2024-11-28 12:49:57.838538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.332 [2024-11-28 12:49:57.838558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.332 [2024-11-28 12:49:57.838566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.332 [2024-11-28 12:49:57.846677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.332 [2024-11-28 12:49:57.846698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.332 [2024-11-28 12:49:57.846706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.591 [2024-11-28 12:49:57.858468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:57.858490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:57.858501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:57.870552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:57.870573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:57.870582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:57.881197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:57.881217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:57.881225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:57.890476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:57.890496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:57.890504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:57.899981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:57.900002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:57.900010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:57.909465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:57.909485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:57.909493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:57.918882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:57.918902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:57.918911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:57.927684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:57.927704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:57.927713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:57.937838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:57.937858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:57.937866] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:57.947852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:57.947878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:57.947887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:57.956331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:57.956352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:57.956360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:57.966593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:57.966614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:57.966623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:57.976117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:57.976137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:15.592 [2024-11-28 12:49:57.976145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:57.985385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:57.985406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:57.985414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:57.996258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:57.996279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:57.996287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:58.006954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:58.006975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:58.006983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:58.015500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:58.015521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 
nsid:1 lba:23789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:58.015529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:58.027358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:58.027379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:58.027387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:58.035674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:58.035695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:58.035703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:58.047506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:58.047526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:58.047534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:58.055793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:58.055813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:58.055821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:58.068325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:58.068345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:58.068354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:58.079745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:58.079764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:58.079772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:58.087879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.592 [2024-11-28 12:49:58.087900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:58.087908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.592 [2024-11-28 12:49:58.098500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 
00:26:15.592 [2024-11-28 12:49:58.098520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.592 [2024-11-28 12:49:58.098528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.851 [2024-11-28 12:49:58.108649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.851 [2024-11-28 12:49:58.108670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.108678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.117988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.118011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.118020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.127778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.127798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.127807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.137280] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.137301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.137309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.148684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.148704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.148712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.160028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.160048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.160057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.168834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.168854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.168862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.179136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.179156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.179165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.188575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.188595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.188603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.198812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.198835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.198843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.209042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.209064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.209072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.222270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.222292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.222300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.230781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.230802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.230810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.243131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.243152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.243160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.255662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.255683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.255691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.264209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.264229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.264238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.275596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.275617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.275625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.286805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.286828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.286836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.295749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.295770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:15.852 [2024-11-28 12:49:58.295782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.304413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.304434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.304443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.315011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.315032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.315040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.324145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.324167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.324176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.335166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.335187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:5531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.335196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.343665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.343685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.343693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.356075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:15.852 [2024-11-28 12:49:58.356096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.852 [2024-11-28 12:49:58.356104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.852 [2024-11-28 12:49:58.367738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.112 [2024-11-28 12:49:58.367760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.112 [2024-11-28 12:49:58.367768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.112 [2024-11-28 12:49:58.376677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.112 [2024-11-28 12:49:58.376698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.112 [2024-11-28 12:49:58.376706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.112 [2024-11-28 12:49:58.386846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.112 [2024-11-28 12:49:58.386870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.112 [2024-11-28 12:49:58.386879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.112 [2024-11-28 12:49:58.396750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.112 [2024-11-28 12:49:58.396771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.113 [2024-11-28 12:49:58.396779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.113 [2024-11-28 12:49:58.406253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.113 [2024-11-28 12:49:58.406273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.113 [2024-11-28 12:49:58.406282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.113 [2024-11-28 12:49:58.416850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 
00:26:16.113 [2024-11-28 12:49:58.416871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.113 [2024-11-28 12:49:58.416880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.113 [2024-11-28 12:49:58.426500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.113 [2024-11-28 12:49:58.426522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.113 [2024-11-28 12:49:58.426530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.113 [2024-11-28 12:49:58.435403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.113 [2024-11-28 12:49:58.435425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.113 [2024-11-28 12:49:58.435433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.113 [2024-11-28 12:49:58.445413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.113 [2024-11-28 12:49:58.445436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.113 [2024-11-28 12:49:58.445444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.113 [2024-11-28 12:49:58.454953] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.113 [2024-11-28 12:49:58.454977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.113 [2024-11-28 12:49:58.454986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.113 [2024-11-28 12:49:58.464199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.113 [2024-11-28 12:49:58.464220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.113 [2024-11-28 12:49:58.464229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.113 [2024-11-28 12:49:58.474702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.113 [2024-11-28 12:49:58.474725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.113 [2024-11-28 12:49:58.474734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.113 [2024-11-28 12:49:58.483905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.113 [2024-11-28 12:49:58.483927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.113 [2024-11-28 12:49:58.483935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:16.113 [2024-11-28 12:49:58.493905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.113 [2024-11-28 12:49:58.493926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.113 [2024-11-28 12:49:58.493934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.113 [2024-11-28 12:49:58.502298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.113 [2024-11-28 12:49:58.502319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.113 [2024-11-28 12:49:58.502327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.113 [2024-11-28 12:49:58.514101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.113 [2024-11-28 12:49:58.514122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.113 [2024-11-28 12:49:58.514130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.113 [2024-11-28 12:49:58.523602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.113 [2024-11-28 12:49:58.523623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.113 [2024-11-28 12:49:58.523632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.113 [2024-11-28 12:49:58.532029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.113 [2024-11-28 12:49:58.532052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.113 [2024-11-28 12:49:58.532060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.113 [2024-11-28 12:49:58.542866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.113 [2024-11-28 12:49:58.542887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.113 [2024-11-28 12:49:58.542895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.113 [2024-11-28 12:49:58.553376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.113 [2024-11-28 12:49:58.553396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.113 [2024-11-28 12:49:58.553408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.113 [2024-11-28 12:49:58.562968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.113 [2024-11-28 12:49:58.562989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.113 [2024-11-28 12:49:58.562998] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.113 [2024-11-28 12:49:58.571837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.113 [2024-11-28 12:49:58.571858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.113 [2024-11-28 12:49:58.571867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.113 [2024-11-28 12:49:58.582243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.113 [2024-11-28 12:49:58.582263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.113 [2024-11-28 12:49:58.582271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.113 [2024-11-28 12:49:58.591670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.113 [2024-11-28 12:49:58.591690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.113 [2024-11-28 12:49:58.591698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.113 [2024-11-28 12:49:58.601600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.113 [2024-11-28 12:49:58.601627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:16.113 [2024-11-28 12:49:58.601636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.113 [2024-11-28 12:49:58.611979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.113 [2024-11-28 12:49:58.612000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.113 [2024-11-28 12:49:58.612008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.113 [2024-11-28 12:49:58.622534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.113 [2024-11-28 12:49:58.622555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.113 [2024-11-28 12:49:58.622563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.372 [2024-11-28 12:49:58.630987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.372 [2024-11-28 12:49:58.631009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.372 [2024-11-28 12:49:58.631017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.372 [2024-11-28 12:49:58.641928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0) 00:26:16.372 [2024-11-28 12:49:58.641958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 
nsid:1 lba:4607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:16.372 [2024-11-28 12:49:58.641967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:16.372 [2024-11-28 12:49:58.653264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:16.372 [2024-11-28 12:49:58.653285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:16.372 [2024-11-28 12:49:58.653293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:16.372 24432.50 IOPS, 95.44 MiB/s [2024-11-28T11:49:58.891Z]
[2024-11-28 12:49:58.661726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e66b0)
00:26:16.372 [2024-11-28 12:49:58.661748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:16.372 [2024-11-28 12:49:58.661756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:16.372
00:26:16.372 Latency(us)
00:26:16.372 [2024-11-28T11:49:58.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:16.372 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:16.372 nvme0n1 : 2.00 24442.41 95.48 0.00 0.00 5230.44 2251.02 18692.01
00:26:16.372 [2024-11-28T11:49:58.891Z] ===================================================================================================================
00:26:16.372 [2024-11-28T11:49:58.891Z] Total : 24442.41 95.48 0.00 0.00 5230.44 2251.02 18692.01
00:26:16.372 {
00:26:16.372   "results": [
00:26:16.372     {
00:26:16.372       "job": "nvme0n1",
00:26:16.372       "core_mask": "0x2",
00:26:16.372       "workload": "randread",
00:26:16.372       "status": "finished",
00:26:16.372       "queue_depth": 128,
00:26:16.372       "io_size": 4096,
00:26:16.372       "runtime": 2.004426,
00:26:16.372       "iops": 24442.40894899587,
00:26:16.372       "mibps": 95.47815995701512,
00:26:16.372       "io_failed": 0,
00:26:16.372       "io_timeout": 0,
00:26:16.372       "avg_latency_us": 5230.436704160931,
00:26:16.372       "min_latency_us": 2251.0191304347827,
00:26:16.372       "max_latency_us": 18692.006956521738
00:26:16.373     }
00:26:16.373   ],
00:26:16.373   "core_count": 1
00:26:16.373 }
00:26:16.373 12:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:16.373 12:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:16.373 12:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:16.373 | .driver_specific
00:26:16.373 | .nvme_error
00:26:16.373 | .status_code
00:26:16.373 | .command_transient_transport_error'
00:26:16.373 12:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:16.373 12:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 192 > 0 ))
00:26:16.373 12:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2669544
00:26:16.373 12:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2669544 ']'
00:26:16.373 12:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2669544
00:26:16.373 12:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:16.632 12:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux =
Linux ']'
00:26:16.632 12:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2669544
00:26:16.632 12:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:16.632 12:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:16.632 12:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2669544'
killing process with pid 2669544
00:26:16.632 12:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2669544
00:26:16.632 Received shutdown signal, test time was about 2.000000 seconds
00:26:16.632
00:26:16.632 Latency(us)
00:26:16.632 [2024-11-28T11:49:59.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:16.632 [2024-11-28T11:49:59.151Z] ===================================================================================================================
00:26:16.632 [2024-11-28T11:49:59.151Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:16.632 12:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2669544
00:26:16.632 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:26:16.632 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:16.632 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:26:16.632 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:16.632 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:16.632 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2670062
00:26:16.632 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2670062 /var/tmp/bperf.sock
00:26:16.632 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:26:16.632 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2670062 ']'
00:26:16.632 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:16.632 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:16.632 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:16.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:16.632 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:16.632 [2024-11-28 12:49:59.146454] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization...
00:26:16.632 [2024-11-28 12:49:59.146502] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2670062 ]
00:26:16.632 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:16.632 Zero copy mechanism will not be used.
00:26:16.891 [2024-11-28 12:49:59.208681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:16.891 [2024-11-28 12:49:59.246658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:16.891 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:16.891 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:26:16.891 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:16.891 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:17.149 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:17.149 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.149 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:17.149 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.149 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:17.149 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:17.717 nvme0n1
00:26:17.717 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:17.717 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.717 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:17.717 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.717 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:17.717 12:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:17.717 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:17.717 Zero copy mechanism will not be used.
00:26:17.717 Running I/O for 2 seconds...
00:26:17.717 [2024-11-28 12:50:00.050531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0)
00:26:17.717 [2024-11-28 12:50:00.050572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.717 [2024-11-28 12:50:00.050584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:17.717 [2024-11-28 12:50:00.057568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0)
00:26:17.717 [2024-11-28 12:50:00.057596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.717 [2024-11-28 12:50:00.057606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[2024-11-28 12:50:00.065060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.717 [2024-11-28 12:50:00.065085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.717 [2024-11-28 12:50:00.065095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.717 [2024-11-28 12:50:00.069293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.717 [2024-11-28 12:50:00.069316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.717 [2024-11-28 12:50:00.069326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.717 [2024-11-28 12:50:00.074478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.717 [2024-11-28 12:50:00.074502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.717 [2024-11-28 12:50:00.074516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.717 [2024-11-28 12:50:00.080710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.717 [2024-11-28 12:50:00.080735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.717 [2024-11-28 12:50:00.080744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.717 [2024-11-28 12:50:00.086836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.717 [2024-11-28 12:50:00.086861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.717 [2024-11-28 12:50:00.086870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.717 [2024-11-28 12:50:00.093049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.717 [2024-11-28 12:50:00.093072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.717 [2024-11-28 12:50:00.093081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.717 [2024-11-28 12:50:00.099331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.717 [2024-11-28 12:50:00.099354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.717 [2024-11-28 12:50:00.099362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.717 [2024-11-28 12:50:00.105665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.717 [2024-11-28 12:50:00.105688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.717 [2024-11-28 12:50:00.105697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.717 [2024-11-28 12:50:00.111973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.717 [2024-11-28 12:50:00.111996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.717 [2024-11-28 12:50:00.112005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.717 [2024-11-28 12:50:00.117657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.717 [2024-11-28 12:50:00.117680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.717 [2024-11-28 12:50:00.117690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.717 [2024-11-28 12:50:00.123810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.717 [2024-11-28 12:50:00.123834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.717 [2024-11-28 12:50:00.123842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.717 [2024-11-28 12:50:00.129527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.717 [2024-11-28 12:50:00.129553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:17.717 [2024-11-28 12:50:00.129562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.717 [2024-11-28 12:50:00.135704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.717 [2024-11-28 12:50:00.135727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.717 [2024-11-28 12:50:00.135736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.717 [2024-11-28 12:50:00.141603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.718 [2024-11-28 12:50:00.141625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.718 [2024-11-28 12:50:00.141633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.718 [2024-11-28 12:50:00.147479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.718 [2024-11-28 12:50:00.147501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.718 [2024-11-28 12:50:00.147510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.718 [2024-11-28 12:50:00.153508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.718 [2024-11-28 12:50:00.153531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.718 [2024-11-28 12:50:00.153540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.718 [2024-11-28 12:50:00.159408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.718 [2024-11-28 12:50:00.159430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.718 [2024-11-28 12:50:00.159440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.718 [2024-11-28 12:50:00.165586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.718 [2024-11-28 12:50:00.165609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.718 [2024-11-28 12:50:00.165617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.718 [2024-11-28 12:50:00.171566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.718 [2024-11-28 12:50:00.171590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.718 [2024-11-28 12:50:00.171600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.718 [2024-11-28 12:50:00.177647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.718 [2024-11-28 12:50:00.177671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.718 [2024-11-28 12:50:00.177682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.718 [2024-11-28 12:50:00.183524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.718 [2024-11-28 12:50:00.183547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.718 [2024-11-28 12:50:00.183557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.718 [2024-11-28 12:50:00.189473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.718 [2024-11-28 12:50:00.189496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.718 [2024-11-28 12:50:00.189506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.718 [2024-11-28 12:50:00.195573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.718 [2024-11-28 12:50:00.195596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.718 [2024-11-28 12:50:00.195606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.718 [2024-11-28 12:50:00.201553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 
00:26:17.718 [2024-11-28 12:50:00.201576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.718 [2024-11-28 12:50:00.201585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.718 [2024-11-28 12:50:00.207283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.718 [2024-11-28 12:50:00.207306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.718 [2024-11-28 12:50:00.207314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.718 [2024-11-28 12:50:00.212837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.718 [2024-11-28 12:50:00.212860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.718 [2024-11-28 12:50:00.212869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.718 [2024-11-28 12:50:00.218254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.718 [2024-11-28 12:50:00.218277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.718 [2024-11-28 12:50:00.218285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.718 [2024-11-28 12:50:00.221318] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.718 [2024-11-28 12:50:00.221341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.718 [2024-11-28 12:50:00.221350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.718 [2024-11-28 12:50:00.227406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.718 [2024-11-28 12:50:00.227429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.718 [2024-11-28 12:50:00.227442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.978 [2024-11-28 12:50:00.233276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.978 [2024-11-28 12:50:00.233300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-11-28 12:50:00.233308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.978 [2024-11-28 12:50:00.239098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.978 [2024-11-28 12:50:00.239120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-11-28 12:50:00.239129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:26:17.978 [2024-11-28 12:50:00.244908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.978 [2024-11-28 12:50:00.244931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-11-28 12:50:00.244940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.978 [2024-11-28 12:50:00.250655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.978 [2024-11-28 12:50:00.250679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-11-28 12:50:00.250689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.978 [2024-11-28 12:50:00.256421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.978 [2024-11-28 12:50:00.256444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-11-28 12:50:00.256452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.978 [2024-11-28 12:50:00.262237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.978 [2024-11-28 12:50:00.262259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-11-28 12:50:00.262267] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.978 [2024-11-28 12:50:00.267939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.978 [2024-11-28 12:50:00.267968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-11-28 12:50:00.267977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.978 [2024-11-28 12:50:00.273680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.978 [2024-11-28 12:50:00.273702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-11-28 12:50:00.273712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.978 [2024-11-28 12:50:00.279346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.978 [2024-11-28 12:50:00.279375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-11-28 12:50:00.279384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.978 [2024-11-28 12:50:00.285121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.978 [2024-11-28 12:50:00.285143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-11-28 
12:50:00.285154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.978 [2024-11-28 12:50:00.291642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.978 [2024-11-28 12:50:00.291665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-11-28 12:50:00.291675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.978 [2024-11-28 12:50:00.299188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.978 [2024-11-28 12:50:00.299212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-11-28 12:50:00.299221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.978 [2024-11-28 12:50:00.307109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.978 [2024-11-28 12:50:00.307133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-11-28 12:50:00.307142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.978 [2024-11-28 12:50:00.315695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.978 [2024-11-28 12:50:00.315719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8000 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-11-28 12:50:00.315730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.978 [2024-11-28 12:50:00.323580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.978 [2024-11-28 12:50:00.323604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-11-28 12:50:00.323614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.978 [2024-11-28 12:50:00.331771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.978 [2024-11-28 12:50:00.331796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-11-28 12:50:00.331807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.978 [2024-11-28 12:50:00.339876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.978 [2024-11-28 12:50:00.339899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-11-28 12:50:00.339913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.978 [2024-11-28 12:50:00.347779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.978 [2024-11-28 12:50:00.347803] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-11-28 12:50:00.347812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.978 [2024-11-28 12:50:00.356074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.978 [2024-11-28 12:50:00.356098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-11-28 12:50:00.356107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.978 [2024-11-28 12:50:00.363956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.978 [2024-11-28 12:50:00.363996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-11-28 12:50:00.364006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.978 [2024-11-28 12:50:00.372006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.978 [2024-11-28 12:50:00.372029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-11-28 12:50:00.372039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.978 [2024-11-28 12:50:00.380420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fab1a0) 00:26:17.979 [2024-11-28 12:50:00.380443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.979 [2024-11-28 12:50:00.380452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.979 [2024-11-28 12:50:00.388284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.979 [2024-11-28 12:50:00.388308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.979 [2024-11-28 12:50:00.388317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.979 [2024-11-28 12:50:00.396585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.979 [2024-11-28 12:50:00.396609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.979 [2024-11-28 12:50:00.396618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.979 [2024-11-28 12:50:00.404610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.979 [2024-11-28 12:50:00.404634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.979 [2024-11-28 12:50:00.404644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.979 [2024-11-28 12:50:00.411836] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.979 [2024-11-28 12:50:00.411863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.979 [2024-11-28 12:50:00.411872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.979 [2024-11-28 12:50:00.419392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.979 [2024-11-28 12:50:00.419416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.979 [2024-11-28 12:50:00.419425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.979 [2024-11-28 12:50:00.427018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.979 [2024-11-28 12:50:00.427041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.979 [2024-11-28 12:50:00.427050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.979 [2024-11-28 12:50:00.432602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.979 [2024-11-28 12:50:00.432626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.979 [2024-11-28 12:50:00.432635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:26:17.979 [2024-11-28 12:50:00.438780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.979 [2024-11-28 12:50:00.438802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.979 [2024-11-28 12:50:00.438810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.979 [2024-11-28 12:50:00.444759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.979 [2024-11-28 12:50:00.444782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.979 [2024-11-28 12:50:00.444790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.979 [2024-11-28 12:50:00.450760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.979 [2024-11-28 12:50:00.450782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.979 [2024-11-28 12:50:00.450791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.979 [2024-11-28 12:50:00.456651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.979 [2024-11-28 12:50:00.456674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.979 [2024-11-28 12:50:00.456682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.979 [2024-11-28 12:50:00.462422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.979 [2024-11-28 12:50:00.462444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.979 [2024-11-28 12:50:00.462454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.979 [2024-11-28 12:50:00.468290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.979 [2024-11-28 12:50:00.468313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.979 [2024-11-28 12:50:00.468322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.979 [2024-11-28 12:50:00.474257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.979 [2024-11-28 12:50:00.474280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.979 [2024-11-28 12:50:00.474288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.979 [2024-11-28 12:50:00.480028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.979 [2024-11-28 12:50:00.480050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.979 [2024-11-28 
12:50:00.480059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.979 [2024-11-28 12:50:00.485761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.979 [2024-11-28 12:50:00.485784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.979 [2024-11-28 12:50:00.485793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.979 [2024-11-28 12:50:00.491862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:17.979 [2024-11-28 12:50:00.491885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.979 [2024-11-28 12:50:00.491894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.239 [2024-11-28 12:50:00.497880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.239 [2024-11-28 12:50:00.497903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.239 [2024-11-28 12:50:00.497911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.239 [2024-11-28 12:50:00.503815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.239 [2024-11-28 12:50:00.503837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6144 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.239 [2024-11-28 12:50:00.503846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.239 [2024-11-28 12:50:00.509803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.239 [2024-11-28 12:50:00.509825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.239 [2024-11-28 12:50:00.509833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.239 [2024-11-28 12:50:00.515483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.239 [2024-11-28 12:50:00.515506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.239 [2024-11-28 12:50:00.515518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.239 [2024-11-28 12:50:00.521176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.239 [2024-11-28 12:50:00.521199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.239 [2024-11-28 12:50:00.521209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.239 [2024-11-28 12:50:00.527002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.239 [2024-11-28 12:50:00.527025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.239 [2024-11-28 12:50:00.527034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.239 [2024-11-28 12:50:00.532410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.239 [2024-11-28 12:50:00.532432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.239 [2024-11-28 12:50:00.532442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.239 [2024-11-28 12:50:00.537855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.239 [2024-11-28 12:50:00.537878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.239 [2024-11-28 12:50:00.537887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.239 [2024-11-28 12:50:00.543507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.239 [2024-11-28 12:50:00.543530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.239 [2024-11-28 12:50:00.543540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.239 [2024-11-28 12:50:00.549327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fab1a0) 00:26:18.239 [2024-11-28 12:50:00.549350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.239 [2024-11-28 12:50:00.549359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.239 [2024-11-28 12:50:00.554812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.239 [2024-11-28 12:50:00.554835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.239 [2024-11-28 12:50:00.554844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.239 [2024-11-28 12:50:00.559994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.239 [2024-11-28 12:50:00.560018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.239 [2024-11-28 12:50:00.560027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.239 [2024-11-28 12:50:00.565334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.239 [2024-11-28 12:50:00.565362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.239 [2024-11-28 12:50:00.565372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.239 [2024-11-28 12:50:00.571401] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.239 [2024-11-28 12:50:00.571426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.239 [2024-11-28 12:50:00.571436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.239 [2024-11-28 12:50:00.577636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.239 [2024-11-28 12:50:00.577658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.239 [2024-11-28 12:50:00.577666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.239 [2024-11-28 12:50:00.583293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.239 [2024-11-28 12:50:00.583317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.239 [2024-11-28 12:50:00.583327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.239 [2024-11-28 12:50:00.589287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.239 [2024-11-28 12:50:00.589311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.239 [2024-11-28 12:50:00.589320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.595164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.595188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.595198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.601085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.601108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.601118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.607078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.607101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.607109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.613144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.613167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.613180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.618960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.618982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.618991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.624604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.624626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.624635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.630282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.630304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.630314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.635943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.635971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.635979] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.641562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.641583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.641593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.647355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.647377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.647387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.653319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.653341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.653350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.659132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.659154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.659163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.664790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.664816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.664825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.670463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.670486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.670494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.676129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.676152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.676161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.681541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.681564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.681573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.686958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.686980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.686989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.692631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.692654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.692663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.698408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.698429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.698439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.704119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.704141] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.704149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.709687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.709710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.709719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.715247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.715270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.715279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.720840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.720863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.720871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.726457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.726479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.726488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.732104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.732126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.732136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.737736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.737759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.240 [2024-11-28 12:50:00.737768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.240 [2024-11-28 12:50:00.743372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.240 [2024-11-28 12:50:00.743394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.241 [2024-11-28 12:50:00.743404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.241 [2024-11-28 12:50:00.749019] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.241 [2024-11-28 12:50:00.749041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.241 [2024-11-28 12:50:00.749051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.500 [2024-11-28 12:50:00.754753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.500 [2024-11-28 12:50:00.754777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.500 [2024-11-28 12:50:00.754787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.500 [2024-11-28 12:50:00.760416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.500 [2024-11-28 12:50:00.760440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.500 [2024-11-28 12:50:00.760453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.500 [2024-11-28 12:50:00.765916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.500 [2024-11-28 12:50:00.765939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.500 [2024-11-28 12:50:00.765955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:26:18.500 [2024-11-28 12:50:00.771493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.500 [2024-11-28 12:50:00.771516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.500 [2024-11-28 12:50:00.771526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.500 [2024-11-28 12:50:00.777140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.500 [2024-11-28 12:50:00.777163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.500 [2024-11-28 12:50:00.777172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.500 [2024-11-28 12:50:00.782777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.500 [2024-11-28 12:50:00.782800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.500 [2024-11-28 12:50:00.782809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.500 [2024-11-28 12:50:00.788485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.500 [2024-11-28 12:50:00.788507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.500 [2024-11-28 12:50:00.788517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.500 [2024-11-28 12:50:00.794193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.500 [2024-11-28 12:50:00.794215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.500 [2024-11-28 12:50:00.794224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.500 [2024-11-28 12:50:00.799839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.500 [2024-11-28 12:50:00.799862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.500 [2024-11-28 12:50:00.799871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.500 [2024-11-28 12:50:00.805422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.500 [2024-11-28 12:50:00.805444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.500 [2024-11-28 12:50:00.805454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.500 [2024-11-28 12:50:00.811085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.500 [2024-11-28 12:50:00.811113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.500 [2024-11-28 12:50:00.811122] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.500 [2024-11-28 12:50:00.816785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.500 [2024-11-28 12:50:00.816810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.500 [2024-11-28 12:50:00.816820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.500 [2024-11-28 12:50:00.822561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.500 [2024-11-28 12:50:00.822583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.500 [2024-11-28 12:50:00.822593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.500 [2024-11-28 12:50:00.825683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.500 [2024-11-28 12:50:00.825705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.500 [2024-11-28 12:50:00.825715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.500 [2024-11-28 12:50:00.831499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.500 [2024-11-28 12:50:00.831521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:18.500 [2024-11-28 12:50:00.831530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.500 [2024-11-28 12:50:00.837173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.500 [2024-11-28 12:50:00.837195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.500 [2024-11-28 12:50:00.837204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.500 [2024-11-28 12:50:00.842698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.500 [2024-11-28 12:50:00.842719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.500 [2024-11-28 12:50:00.842727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.500 [2024-11-28 12:50:00.847987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.500 [2024-11-28 12:50:00.848009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.500 [2024-11-28 12:50:00.848019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.500 [2024-11-28 12:50:00.853152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.853175] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.853184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.858429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.858451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.858461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.863795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.863817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.863827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.869219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.869241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.869250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.874740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 
12:50:00.874762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.874770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.880346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.880368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.880376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.886057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.886079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.886087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.891620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.891643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.891653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.897360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.897382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.897390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.903060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.903082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.903095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.908633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.908655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.908664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.914648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.914672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.914681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.921004] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.921027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.921036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.926919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.926942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.926956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.932606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.932628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.932638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.938309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.938332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.938341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.943896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.943919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.943927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.949458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.949480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.949490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.955041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.955063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.955072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.960618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.960640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.960650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.966088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.966111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.966121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.971659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.971682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.971690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.977121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.977142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.977151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.982545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.982567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 
12:50:00.982577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.988120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.988142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.988151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.993887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.993909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.993919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:00.999509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.501 [2024-11-28 12:50:00.999532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.501 [2024-11-28 12:50:00.999546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.501 [2024-11-28 12:50:01.005056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.502 [2024-11-28 12:50:01.005078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.502 [2024-11-28 12:50:01.005088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.502 [2024-11-28 12:50:01.010690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.502 [2024-11-28 12:50:01.010713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.502 [2024-11-28 12:50:01.010722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.760 [2024-11-28 12:50:01.016533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.760 [2024-11-28 12:50:01.016556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.760 [2024-11-28 12:50:01.016564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.760 [2024-11-28 12:50:01.022228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.760 [2024-11-28 12:50:01.022251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.760 [2024-11-28 12:50:01.022259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.760 [2024-11-28 12:50:01.027900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.760 [2024-11-28 12:50:01.027924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.760 [2024-11-28 12:50:01.027933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.760 [2024-11-28 12:50:01.033521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.760 [2024-11-28 12:50:01.033544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.760 [2024-11-28 12:50:01.033553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.760 [2024-11-28 12:50:01.039198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.760 [2024-11-28 12:50:01.039221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.760 [2024-11-28 12:50:01.039230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.760 [2024-11-28 12:50:01.044857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.760 [2024-11-28 12:50:01.044880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.760 [2024-11-28 12:50:01.044888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.760 5192.00 IOPS, 649.00 MiB/s [2024-11-28T11:50:01.279Z] [2024-11-28 12:50:01.052268] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.760 [2024-11-28 12:50:01.052297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.760 [2024-11-28 12:50:01.052306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.760 [2024-11-28 12:50:01.058736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.760 [2024-11-28 12:50:01.058759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.760 [2024-11-28 12:50:01.058768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.760 [2024-11-28 12:50:01.064901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.760 [2024-11-28 12:50:01.064924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.760 [2024-11-28 12:50:01.064933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.760 [2024-11-28 12:50:01.071173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.760 [2024-11-28 12:50:01.071197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.760 [2024-11-28 12:50:01.071208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:26:18.760 [2024-11-28 12:50:01.077070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.760 [2024-11-28 12:50:01.077093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.760 [2024-11-28 12:50:01.077103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.760 [2024-11-28 12:50:01.082841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.760 [2024-11-28 12:50:01.082864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.760 [2024-11-28 12:50:01.082874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.760 [2024-11-28 12:50:01.088466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.760 [2024-11-28 12:50:01.088490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.760 [2024-11-28 12:50:01.088499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.760 [2024-11-28 12:50:01.094120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.760 [2024-11-28 12:50:01.094143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.760 [2024-11-28 12:50:01.094152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.760 [2024-11-28 12:50:01.100070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.760 [2024-11-28 12:50:01.100092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.760 [2024-11-28 12:50:01.100101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.760 [2024-11-28 12:50:01.107016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.760 [2024-11-28 12:50:01.107039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.760 [2024-11-28 12:50:01.107049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.760 [2024-11-28 12:50:01.114720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.760 [2024-11-28 12:50:01.114745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.760 [2024-11-28 12:50:01.114755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.760 [2024-11-28 12:50:01.121809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.760 [2024-11-28 12:50:01.121832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.760 [2024-11-28 12:50:01.121841] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.760 [2024-11-28 12:50:01.125385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.760 [2024-11-28 12:50:01.125409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.760 [2024-11-28 12:50:01.125419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.760 [2024-11-28 12:50:01.132301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.760 [2024-11-28 12:50:01.132325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.760 [2024-11-28 12:50:01.132334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.760 [2024-11-28 12:50:01.138721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.761 [2024-11-28 12:50:01.138746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.761 [2024-11-28 12:50:01.138755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.761 [2024-11-28 12:50:01.146390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.761 [2024-11-28 12:50:01.146414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:18.761 [2024-11-28 12:50:01.146423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.761 [2024-11-28 12:50:01.153873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.761 [2024-11-28 12:50:01.153897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.761 [2024-11-28 12:50:01.153906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.761 [2024-11-28 12:50:01.160645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.761 [2024-11-28 12:50:01.160668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.761 [2024-11-28 12:50:01.160681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.761 [2024-11-28 12:50:01.168181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.761 [2024-11-28 12:50:01.168205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.761 [2024-11-28 12:50:01.168215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.761 [2024-11-28 12:50:01.175864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.761 [2024-11-28 12:50:01.175888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.761 [2024-11-28 12:50:01.175897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.761 [2024-11-28 12:50:01.183897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.761 [2024-11-28 12:50:01.183919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.761 [2024-11-28 12:50:01.183930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.761 [2024-11-28 12:50:01.190944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.761 [2024-11-28 12:50:01.190973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.761 [2024-11-28 12:50:01.190982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.761 [2024-11-28 12:50:01.198470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.761 [2024-11-28 12:50:01.198494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.761 [2024-11-28 12:50:01.198505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.761 [2024-11-28 12:50:01.205964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.761 [2024-11-28 12:50:01.205988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.761 [2024-11-28 12:50:01.205997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.761 [2024-11-28 12:50:01.214252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.761 [2024-11-28 12:50:01.214275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.761 [2024-11-28 12:50:01.214285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.761 [2024-11-28 12:50:01.221206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.761 [2024-11-28 12:50:01.221230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.761 [2024-11-28 12:50:01.221239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.761 [2024-11-28 12:50:01.228626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.761 [2024-11-28 12:50:01.228650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.761 [2024-11-28 12:50:01.228659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.761 [2024-11-28 12:50:01.236225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 
00:26:18.761 [2024-11-28 12:50:01.236249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.761 [2024-11-28 12:50:01.236257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.761 [2024-11-28 12:50:01.244354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.761 [2024-11-28 12:50:01.244379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.761 [2024-11-28 12:50:01.244388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:18.761 [2024-11-28 12:50:01.251271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.761 [2024-11-28 12:50:01.251295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.761 [2024-11-28 12:50:01.251305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:18.761 [2024-11-28 12:50:01.258879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.761 [2024-11-28 12:50:01.258902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.761 [2024-11-28 12:50:01.258912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:18.761 [2024-11-28 12:50:01.266429] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.761 [2024-11-28 12:50:01.266453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.761 [2024-11-28 12:50:01.266462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:18.761 [2024-11-28 12:50:01.274442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:18.761 [2024-11-28 12:50:01.274465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.761 [2024-11-28 12:50:01.274474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.046 [2024-11-28 12:50:01.282861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.046 [2024-11-28 12:50:01.282886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.046 [2024-11-28 12:50:01.282895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.046 [2024-11-28 12:50:01.291567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.046 [2024-11-28 12:50:01.291590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.046 [2024-11-28 12:50:01.291604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:26:19.046 [2024-11-28 12:50:01.300431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.046 [2024-11-28 12:50:01.300455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.046 [2024-11-28 12:50:01.300464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.046 [2024-11-28 12:50:01.308588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.046 [2024-11-28 12:50:01.308613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.046 [2024-11-28 12:50:01.308621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.046 [2024-11-28 12:50:01.315504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.046 [2024-11-28 12:50:01.315528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.046 [2024-11-28 12:50:01.315536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.046 [2024-11-28 12:50:01.321992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.046 [2024-11-28 12:50:01.322016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.046 [2024-11-28 12:50:01.322026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.046 [2024-11-28 12:50:01.328325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.046 [2024-11-28 12:50:01.328347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.046 [2024-11-28 12:50:01.328356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.046 [2024-11-28 12:50:01.334592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.046 [2024-11-28 12:50:01.334614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.046 [2024-11-28 12:50:01.334623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.046 [2024-11-28 12:50:01.340498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.046 [2024-11-28 12:50:01.340521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.046 [2024-11-28 12:50:01.340529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.046 [2024-11-28 12:50:01.346546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.046 [2024-11-28 12:50:01.346569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.046 [2024-11-28 12:50:01.346577] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.046 [2024-11-28 12:50:01.352479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.046 [2024-11-28 12:50:01.352507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.046 [2024-11-28 12:50:01.352516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.046 [2024-11-28 12:50:01.358179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.046 [2024-11-28 12:50:01.358201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.046 [2024-11-28 12:50:01.358210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.046 [2024-11-28 12:50:01.363890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.046 [2024-11-28 12:50:01.363912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.046 [2024-11-28 12:50:01.363920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.046 [2024-11-28 12:50:01.369506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.046 [2024-11-28 12:50:01.369529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:19.046 [2024-11-28 12:50:01.369537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.046 [2024-11-28 12:50:01.375106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.046 [2024-11-28 12:50:01.375129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.046 [2024-11-28 12:50:01.375137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.046 [2024-11-28 12:50:01.380659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.046 [2024-11-28 12:50:01.380681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.046 [2024-11-28 12:50:01.380690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.047 [2024-11-28 12:50:01.386293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.047 [2024-11-28 12:50:01.386315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.047 [2024-11-28 12:50:01.386324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.047 [2024-11-28 12:50:01.392004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.047 [2024-11-28 12:50:01.392028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.047 [2024-11-28 12:50:01.392036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.047 [2024-11-28 12:50:01.398008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.047 [2024-11-28 12:50:01.398031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.047 [2024-11-28 12:50:01.398040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.047 [2024-11-28 12:50:01.404202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.047 [2024-11-28 12:50:01.404225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.047 [2024-11-28 12:50:01.404235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.047 [2024-11-28 12:50:01.410181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.047 [2024-11-28 12:50:01.410204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.047 [2024-11-28 12:50:01.410212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.047 [2024-11-28 12:50:01.416224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.047 [2024-11-28 12:50:01.416247] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.047 [2024-11-28 12:50:01.416256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.047 [2024-11-28 12:50:01.422080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.047 [2024-11-28 12:50:01.422103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.047 [2024-11-28 12:50:01.422112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.047 [2024-11-28 12:50:01.428186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.047 [2024-11-28 12:50:01.428210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.047 [2024-11-28 12:50:01.428219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.047 [2024-11-28 12:50:01.434693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.047 [2024-11-28 12:50:01.434715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.047 [2024-11-28 12:50:01.434724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.047 [2024-11-28 12:50:01.440925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fab1a0) 00:26:19.047 [2024-11-28 12:50:01.440956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.047 [2024-11-28 12:50:01.440966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.047 [2024-11-28 12:50:01.447182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.047 [2024-11-28 12:50:01.447204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.047 [2024-11-28 12:50:01.447213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.047 [2024-11-28 12:50:01.453412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.047 [2024-11-28 12:50:01.453435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.047 [2024-11-28 12:50:01.453448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.047 [2024-11-28 12:50:01.459611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.047 [2024-11-28 12:50:01.459634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.047 [2024-11-28 12:50:01.459642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.047 [2024-11-28 12:50:01.463455] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.047 [2024-11-28 12:50:01.463477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.047 [2024-11-28 12:50:01.463485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.047 [2024-11-28 12:50:01.467771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.047 [2024-11-28 12:50:01.467794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.047 [2024-11-28 12:50:01.467802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.047 [2024-11-28 12:50:01.473205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.047 [2024-11-28 12:50:01.473228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.047 [2024-11-28 12:50:01.473236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.047 [2024-11-28 12:50:01.478658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.047 [2024-11-28 12:50:01.478681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.047 [2024-11-28 12:50:01.478692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:26:19.047 [2024-11-28 12:50:01.484321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.047 [2024-11-28 12:50:01.484344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.047 [2024-11-28 12:50:01.484353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.047 [2024-11-28 12:50:01.489991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.047 [2024-11-28 12:50:01.490013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.047 [2024-11-28 12:50:01.490022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.047 [2024-11-28 12:50:01.495683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.047 [2024-11-28 12:50:01.495705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.047 [2024-11-28 12:50:01.495715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.047 [2024-11-28 12:50:01.501431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.048 [2024-11-28 12:50:01.501457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.048 [2024-11-28 12:50:01.501466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.048 [2024-11-28 12:50:01.507229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.048 [2024-11-28 12:50:01.507252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.048 [2024-11-28 12:50:01.507261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.048 [2024-11-28 12:50:01.512962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.048 [2024-11-28 12:50:01.512985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.048 [2024-11-28 12:50:01.512994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.048 [2024-11-28 12:50:01.518603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.048 [2024-11-28 12:50:01.518626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.048 [2024-11-28 12:50:01.518634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.048 [2024-11-28 12:50:01.524164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.048 [2024-11-28 12:50:01.524187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.048 [2024-11-28 
12:50:01.524198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.048 [2024-11-28 12:50:01.530006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.048 [2024-11-28 12:50:01.530027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.048 [2024-11-28 12:50:01.530035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.048 [2024-11-28 12:50:01.535789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.048 [2024-11-28 12:50:01.535811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.048 [2024-11-28 12:50:01.535820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.048 [2024-11-28 12:50:01.541276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.048 [2024-11-28 12:50:01.541298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.048 [2024-11-28 12:50:01.541306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.048 [2024-11-28 12:50:01.547150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.048 [2024-11-28 12:50:01.547173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20224 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.048 [2024-11-28 12:50:01.547182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.048 [2024-11-28 12:50:01.553575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.048 [2024-11-28 12:50:01.553597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.048 [2024-11-28 12:50:01.553605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.048 [2024-11-28 12:50:01.560068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.048 [2024-11-28 12:50:01.560091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.048 [2024-11-28 12:50:01.560099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.308 [2024-11-28 12:50:01.566418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.308 [2024-11-28 12:50:01.566439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.308 [2024-11-28 12:50:01.566448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.308 [2024-11-28 12:50:01.572756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.308 [2024-11-28 12:50:01.572780] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.308 [2024-11-28 12:50:01.572789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.308 [2024-11-28 12:50:01.578837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.308 [2024-11-28 12:50:01.578860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.308 [2024-11-28 12:50:01.578868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.308 [2024-11-28 12:50:01.584766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.308 [2024-11-28 12:50:01.584789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.308 [2024-11-28 12:50:01.584797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.308 [2024-11-28 12:50:01.591031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.308 [2024-11-28 12:50:01.591053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.308 [2024-11-28 12:50:01.591061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.308 [2024-11-28 12:50:01.597441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.308 [2024-11-28 
12:50:01.597464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.308 [2024-11-28 12:50:01.597472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.309 [2024-11-28 12:50:01.603869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.309 [2024-11-28 12:50:01.603893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.309 [2024-11-28 12:50:01.603905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.309 [2024-11-28 12:50:01.611612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.309 [2024-11-28 12:50:01.611635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.309 [2024-11-28 12:50:01.611644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.309 [2024-11-28 12:50:01.619016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.309 [2024-11-28 12:50:01.619039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.309 [2024-11-28 12:50:01.619048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.309 [2024-11-28 12:50:01.626224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1fab1a0) 00:26:19.309 [2024-11-28 12:50:01.626248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.309 [2024-11-28 12:50:01.626256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.309 [2024-11-28 12:50:01.632528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.309 [2024-11-28 12:50:01.632550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.309 [2024-11-28 12:50:01.632558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.309 [2024-11-28 12:50:01.639129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.309 [2024-11-28 12:50:01.639152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.309 [2024-11-28 12:50:01.639161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.309 [2024-11-28 12:50:01.645062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.309 [2024-11-28 12:50:01.645084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.309 [2024-11-28 12:50:01.645092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.309 [2024-11-28 12:50:01.650626] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.309 [2024-11-28 12:50:01.650647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.309 [2024-11-28 12:50:01.650655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.309 [2024-11-28 12:50:01.656701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.309 [2024-11-28 12:50:01.656723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.309 [2024-11-28 12:50:01.656732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.309 [2024-11-28 12:50:01.662625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.309 [2024-11-28 12:50:01.662647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.309 [2024-11-28 12:50:01.662656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.309 [2024-11-28 12:50:01.668818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.309 [2024-11-28 12:50:01.668840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.309 [2024-11-28 12:50:01.668848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:26:19.309 [2024-11-28 12:50:01.675337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.309 [2024-11-28 12:50:01.675359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.309 [2024-11-28 12:50:01.675368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.309 [2024-11-28 12:50:01.681642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.309 [2024-11-28 12:50:01.681665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.309 [2024-11-28 12:50:01.681673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.309 [2024-11-28 12:50:01.687319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.309 [2024-11-28 12:50:01.687342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.309 [2024-11-28 12:50:01.687350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.309 [2024-11-28 12:50:01.693433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.309 [2024-11-28 12:50:01.693455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.309 [2024-11-28 12:50:01.693463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.309 [2024-11-28 12:50:01.699448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.309 [2024-11-28 12:50:01.699470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.309 [2024-11-28 12:50:01.699479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.309 [2024-11-28 12:50:01.705131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.309 [2024-11-28 12:50:01.705153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.309 [2024-11-28 12:50:01.705162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.309 [2024-11-28 12:50:01.711112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.309 [2024-11-28 12:50:01.711134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.309 [2024-11-28 12:50:01.711145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.309 [2024-11-28 12:50:01.716810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.309 [2024-11-28 12:50:01.716832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.309 [2024-11-28 
12:50:01.716841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.309 [2024-11-28 12:50:01.720493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.309 [2024-11-28 12:50:01.720516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.309 [2024-11-28 12:50:01.720524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.309 [2024-11-28 12:50:01.726333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.309 [2024-11-28 12:50:01.726355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.309 [2024-11-28 12:50:01.726364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.309 [2024-11-28 12:50:01.733129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.309 [2024-11-28 12:50:01.733152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-11-28 12:50:01.733160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.310 [2024-11-28 12:50:01.739761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.310 [2024-11-28 12:50:01.739783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1920 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-11-28 12:50:01.739793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.310 [2024-11-28 12:50:01.746680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.310 [2024-11-28 12:50:01.746702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-11-28 12:50:01.746711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.310 [2024-11-28 12:50:01.753526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.310 [2024-11-28 12:50:01.753549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-11-28 12:50:01.753558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.310 [2024-11-28 12:50:01.760768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.310 [2024-11-28 12:50:01.760791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-11-28 12:50:01.760800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.310 [2024-11-28 12:50:01.767553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.310 [2024-11-28 12:50:01.767579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-11-28 12:50:01.767588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.310 [2024-11-28 12:50:01.774287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.310 [2024-11-28 12:50:01.774309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-11-28 12:50:01.774318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.310 [2024-11-28 12:50:01.781004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.310 [2024-11-28 12:50:01.781027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-11-28 12:50:01.781035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.310 [2024-11-28 12:50:01.787684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.310 [2024-11-28 12:50:01.787707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-11-28 12:50:01.787715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.310 [2024-11-28 12:50:01.794282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fab1a0) 00:26:19.310 [2024-11-28 12:50:01.794304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-11-28 12:50:01.794313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.310 [2024-11-28 12:50:01.800078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.310 [2024-11-28 12:50:01.800099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-11-28 12:50:01.800107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.310 [2024-11-28 12:50:01.806497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.310 [2024-11-28 12:50:01.806519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-11-28 12:50:01.806528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.310 [2024-11-28 12:50:01.812683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.310 [2024-11-28 12:50:01.812705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-11-28 12:50:01.812713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.310 [2024-11-28 12:50:01.819304] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.310 [2024-11-28 12:50:01.819344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-11-28 12:50:01.819353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.570 [2024-11-28 12:50:01.825552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.570 [2024-11-28 12:50:01.825578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.570 [2024-11-28 12:50:01.825588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.570 [2024-11-28 12:50:01.832089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.570 [2024-11-28 12:50:01.832112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.570 [2024-11-28 12:50:01.832122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.570 [2024-11-28 12:50:01.838807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.570 [2024-11-28 12:50:01.838831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.570 [2024-11-28 12:50:01.838840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:26:19.570 [2024-11-28 12:50:01.845308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.570 [2024-11-28 12:50:01.845332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.570 [2024-11-28 12:50:01.845340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.570 [2024-11-28 12:50:01.851828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.570 [2024-11-28 12:50:01.851852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.570 [2024-11-28 12:50:01.851860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.570 [2024-11-28 12:50:01.858645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.570 [2024-11-28 12:50:01.858668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.570 [2024-11-28 12:50:01.858676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.570 [2024-11-28 12:50:01.865379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.570 [2024-11-28 12:50:01.865403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.570 [2024-11-28 12:50:01.865412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.570 [2024-11-28 12:50:01.872748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.570 [2024-11-28 12:50:01.872770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.570 [2024-11-28 12:50:01.872778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.570 [2024-11-28 12:50:01.879765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.570 [2024-11-28 12:50:01.879787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.570 [2024-11-28 12:50:01.879804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.570 [2024-11-28 12:50:01.886680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.570 [2024-11-28 12:50:01.886703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.570 [2024-11-28 12:50:01.886712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.570 [2024-11-28 12:50:01.892721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.570 [2024-11-28 12:50:01.892745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.570 [2024-11-28 
12:50:01.892754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.570 [2024-11-28 12:50:01.898732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.570 [2024-11-28 12:50:01.898755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.570 [2024-11-28 12:50:01.898763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.570 [2024-11-28 12:50:01.904611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.570 [2024-11-28 12:50:01.904634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.570 [2024-11-28 12:50:01.904642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.570 [2024-11-28 12:50:01.910629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.570 [2024-11-28 12:50:01.910651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.570 [2024-11-28 12:50:01.910660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.570 [2024-11-28 12:50:01.916709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.570 [2024-11-28 12:50:01.916732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5664 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-11-28 12:50:01.916740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.571 [2024-11-28 12:50:01.923214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.571 [2024-11-28 12:50:01.923237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-11-28 12:50:01.923245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.571 [2024-11-28 12:50:01.929789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.571 [2024-11-28 12:50:01.929812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-11-28 12:50:01.929820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.571 [2024-11-28 12:50:01.936050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.571 [2024-11-28 12:50:01.936076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-11-28 12:50:01.936085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.571 [2024-11-28 12:50:01.942217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.571 [2024-11-28 12:50:01.942239] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-11-28 12:50:01.942248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.571 [2024-11-28 12:50:01.948671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.571 [2024-11-28 12:50:01.948694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-11-28 12:50:01.948702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.571 [2024-11-28 12:50:01.954814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.571 [2024-11-28 12:50:01.954836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-11-28 12:50:01.954845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.571 [2024-11-28 12:50:01.960789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.571 [2024-11-28 12:50:01.960812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-11-28 12:50:01.960820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.571 [2024-11-28 12:50:01.966733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.571 [2024-11-28 
12:50:01.966756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-11-28 12:50:01.966765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.571 [2024-11-28 12:50:01.972638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.571 [2024-11-28 12:50:01.972661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-11-28 12:50:01.972669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.571 [2024-11-28 12:50:01.978359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.571 [2024-11-28 12:50:01.978381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-11-28 12:50:01.978390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.571 [2024-11-28 12:50:01.984399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.571 [2024-11-28 12:50:01.984423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-11-28 12:50:01.984435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.571 [2024-11-28 12:50:01.991018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1fab1a0) 00:26:19.571 [2024-11-28 12:50:01.991041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-11-28 12:50:01.991049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.571 [2024-11-28 12:50:01.996752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.571 [2024-11-28 12:50:01.996774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-11-28 12:50:01.996782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.571 [2024-11-28 12:50:02.002511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.571 [2024-11-28 12:50:02.002534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-11-28 12:50:02.002543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.571 [2024-11-28 12:50:02.008187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.571 [2024-11-28 12:50:02.008209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-11-28 12:50:02.008218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.571 [2024-11-28 12:50:02.013902] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.571 [2024-11-28 12:50:02.013924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-11-28 12:50:02.013932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.571 [2024-11-28 12:50:02.019775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.571 [2024-11-28 12:50:02.019797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-11-28 12:50:02.019805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.571 [2024-11-28 12:50:02.025699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.571 [2024-11-28 12:50:02.025722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-11-28 12:50:02.025730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.571 [2024-11-28 12:50:02.031838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.571 [2024-11-28 12:50:02.031861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-11-28 12:50:02.031870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:26:19.571 [2024-11-28 12:50:02.038820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.571 [2024-11-28 12:50:02.038846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-11-28 12:50:02.038854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.571 [2024-11-28 12:50:02.045220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.571 [2024-11-28 12:50:02.045242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.572 [2024-11-28 12:50:02.045251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.572 5035.00 IOPS, 629.38 MiB/s [2024-11-28T11:50:02.091Z] [2024-11-28 12:50:02.052591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fab1a0) 00:26:19.572 [2024-11-28 12:50:02.052614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.572 [2024-11-28 12:50:02.052623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.830 00:26:19.830 Latency(us) 00:26:19.830 [2024-11-28T11:50:02.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:19.830 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:19.830 nvme0n1 : 2.04 4934.90 616.86 0.00 0.00 3179.37 487.96 44906.41 00:26:19.830 [2024-11-28T11:50:02.349Z] 
=================================================================================================================== 00:26:19.830 [2024-11-28T11:50:02.349Z] Total : 4934.90 616.86 0.00 0.00 3179.37 487.96 44906.41 00:26:19.830 { 00:26:19.830 "results": [ 00:26:19.830 { 00:26:19.830 "job": "nvme0n1", 00:26:19.830 "core_mask": "0x2", 00:26:19.830 "workload": "randread", 00:26:19.830 "status": "finished", 00:26:19.830 "queue_depth": 16, 00:26:19.830 "io_size": 131072, 00:26:19.830 "runtime": 2.04381, 00:26:19.830 "iops": 4934.900993732294, 00:26:19.830 "mibps": 616.8626242165368, 00:26:19.830 "io_failed": 0, 00:26:19.830 "io_timeout": 0, 00:26:19.830 "avg_latency_us": 3179.374816577434, 00:26:19.830 "min_latency_us": 487.9582608695652, 00:26:19.830 "max_latency_us": 44906.406956521736 00:26:19.830 } 00:26:19.830 ], 00:26:19.830 "core_count": 1 00:26:19.830 } 00:26:19.830 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:19.830 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:19.830 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:19.830 | .driver_specific 00:26:19.830 | .nvme_error 00:26:19.830 | .status_code 00:26:19.830 | .command_transient_transport_error' 00:26:19.830 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:19.830 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 326 > 0 )) 00:26:19.830 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2670062 00:26:19.830 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2670062 ']' 00:26:19.830 12:50:02 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2670062 00:26:19.830 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:19.830 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:19.830 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2670062 00:26:20.088 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:20.088 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:20.088 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2670062' 00:26:20.088 killing process with pid 2670062 00:26:20.088 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2670062 00:26:20.088 Received shutdown signal, test time was about 2.000000 seconds 00:26:20.088 00:26:20.088 Latency(us) 00:26:20.088 [2024-11-28T11:50:02.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.088 [2024-11-28T11:50:02.607Z] =================================================================================================================== 00:26:20.088 [2024-11-28T11:50:02.607Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:20.088 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2670062 00:26:20.088 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:20.088 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:20.088 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # 
rw=randwrite 00:26:20.088 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:20.088 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:20.088 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2670552 00:26:20.088 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2670552 /var/tmp/bperf.sock 00:26:20.088 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:20.088 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2670552 ']' 00:26:20.088 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:20.089 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.089 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:20.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:20.089 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.089 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:20.089 [2024-11-28 12:50:02.571477] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:26:20.089 [2024-11-28 12:50:02.571531] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2670552 ] 00:26:20.346 [2024-11-28 12:50:02.635593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.346 [2024-11-28 12:50:02.678585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.346 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.346 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:20.346 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:20.346 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:20.603 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:20.603 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.603 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:20.603 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.603 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:20.603 12:50:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:20.861 nvme0n1 00:26:20.861 12:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:20.861 12:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.861 12:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:20.861 12:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.861 12:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:20.861 12:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:21.120 Running I/O for 2 seconds... 
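The `accel_error_inject_error -o crc32c -t corrupt` call above makes SPDK's accel layer corrupt CRC-32C results, which is why every PDU in the stream that follows fails its data digest check (NVMe/TCP header and data digests are CRC-32C). As a reference point only — this is a minimal bitwise implementation of the checksum being corrupted, not SPDK's accelerated code path — CRC-32C (Castagnoli) can be sketched as:

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.

    This is the checksum NVMe/TCP uses for header/data digests; real
    implementations use table-driven or SSE4.2/accel-offloaded variants.
    """
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the ASCII string "123456789"
print(hex(crc32c(b"123456789")))  # 0xe3069283
```

When the injected corruption flips this value for a received data PDU, the target's digest no longer matches the initiator's recomputation, producing the `data_crc32_calc_done: *ERROR*: Data digest error` lines and the TRANSIENT TRANSPORT ERROR (00/22) completions seen below.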
00:26:21.120 [2024-11-28 12:50:03.462803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.120 [2024-11-28 12:50:03.462989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.120 [2024-11-28 12:50:03.463017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.120 [2024-11-28 12:50:03.472613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.120 [2024-11-28 12:50:03.472782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.120 [2024-11-28 12:50:03.472802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.120 [2024-11-28 12:50:03.482416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.120 [2024-11-28 12:50:03.482582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.120 [2024-11-28 12:50:03.482602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.120 [2024-11-28 12:50:03.492186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.120 [2024-11-28 12:50:03.492357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.120 [2024-11-28 12:50:03.492376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.120 [2024-11-28 12:50:03.502134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.120 [2024-11-28 12:50:03.502304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.120 [2024-11-28 12:50:03.502324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.120 [2024-11-28 12:50:03.511905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.120 [2024-11-28 12:50:03.512108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.120 [2024-11-28 12:50:03.512128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.120 [2024-11-28 12:50:03.521663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.120 [2024-11-28 12:50:03.521830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.120 [2024-11-28 12:50:03.521848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.120 [2024-11-28 12:50:03.531391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.120 [2024-11-28 12:50:03.531552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.120 [2024-11-28 12:50:03.531570] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.120 [2024-11-28 12:50:03.541106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.121 [2024-11-28 12:50:03.541268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.121 [2024-11-28 12:50:03.541286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.121 [2024-11-28 12:50:03.550797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.121 [2024-11-28 12:50:03.550964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.121 [2024-11-28 12:50:03.550982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.121 [2024-11-28 12:50:03.560498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.121 [2024-11-28 12:50:03.560659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.121 [2024-11-28 12:50:03.560678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.121 [2024-11-28 12:50:03.570177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.121 [2024-11-28 12:50:03.570338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:21.121 [2024-11-28 12:50:03.570356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.121 [2024-11-28 12:50:03.579877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.121 [2024-11-28 12:50:03.580044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.121 [2024-11-28 12:50:03.580062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.121 [2024-11-28 12:50:03.589535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.121 [2024-11-28 12:50:03.589696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.121 [2024-11-28 12:50:03.589715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.121 [2024-11-28 12:50:03.599222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.121 [2024-11-28 12:50:03.599384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.121 [2024-11-28 12:50:03.599402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.121 [2024-11-28 12:50:03.608963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.121 [2024-11-28 12:50:03.609125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 
lba:19728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.121 [2024-11-28 12:50:03.609143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.121 [2024-11-28 12:50:03.618756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.121 [2024-11-28 12:50:03.618921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.121 [2024-11-28 12:50:03.618939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.121 [2024-11-28 12:50:03.628366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.121 [2024-11-28 12:50:03.628527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.121 [2024-11-28 12:50:03.628545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.380 [2024-11-28 12:50:03.638423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.380 [2024-11-28 12:50:03.638591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.380 [2024-11-28 12:50:03.638611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.380 [2024-11-28 12:50:03.648258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.380 [2024-11-28 12:50:03.648423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.380 [2024-11-28 12:50:03.648441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.380 [2024-11-28 12:50:03.657929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.380 [2024-11-28 12:50:03.658102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.380 [2024-11-28 12:50:03.658120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.380 [2024-11-28 12:50:03.667609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.380 [2024-11-28 12:50:03.667771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.380 [2024-11-28 12:50:03.667789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.380 [2024-11-28 12:50:03.677313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.380 [2024-11-28 12:50:03.677471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.380 [2024-11-28 12:50:03.677492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.380 [2024-11-28 12:50:03.686979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 
00:26:21.380 [2024-11-28 12:50:03.687140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.380 [2024-11-28 12:50:03.687158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.380 [2024-11-28 12:50:03.696657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.380 [2024-11-28 12:50:03.696819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.380 [2024-11-28 12:50:03.696838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.380 [2024-11-28 12:50:03.706393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.380 [2024-11-28 12:50:03.706555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.380 [2024-11-28 12:50:03.706574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.380 [2024-11-28 12:50:03.716104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.380 [2024-11-28 12:50:03.716268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.380 [2024-11-28 12:50:03.716290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.380 [2024-11-28 12:50:03.726013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.380 [2024-11-28 12:50:03.726182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.380 [2024-11-28 12:50:03.726203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.380 [2024-11-28 12:50:03.735706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.380 [2024-11-28 12:50:03.735869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.380 [2024-11-28 12:50:03.735888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.380 [2024-11-28 12:50:03.745416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.381 [2024-11-28 12:50:03.745579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.381 [2024-11-28 12:50:03.745597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.381 [2024-11-28 12:50:03.755103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.381 [2024-11-28 12:50:03.755268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.381 [2024-11-28 12:50:03.755286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.381 [2024-11-28 12:50:03.764935] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.381 [2024-11-28 12:50:03.765116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.381 [2024-11-28 12:50:03.765135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.381 [2024-11-28 12:50:03.774533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.381 [2024-11-28 12:50:03.774695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.381 [2024-11-28 12:50:03.774714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.381 [2024-11-28 12:50:03.784194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.381 [2024-11-28 12:50:03.784356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.381 [2024-11-28 12:50:03.784374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.381 [2024-11-28 12:50:03.793902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.381 [2024-11-28 12:50:03.794072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.381 [2024-11-28 12:50:03.794091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:26:21.381 [2024-11-28 12:50:03.803614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.381 [2024-11-28 12:50:03.803778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.381 [2024-11-28 12:50:03.803796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.381 [2024-11-28 12:50:03.813368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.381 [2024-11-28 12:50:03.813531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.381 [2024-11-28 12:50:03.813549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.381 [2024-11-28 12:50:03.823035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.381 [2024-11-28 12:50:03.823199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.381 [2024-11-28 12:50:03.823217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.381 [2024-11-28 12:50:03.832703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.381 [2024-11-28 12:50:03.832866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.381 [2024-11-28 12:50:03.832885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.381 [2024-11-28 12:50:03.842327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.381 [2024-11-28 12:50:03.842491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.381 [2024-11-28 12:50:03.842509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.381 [2024-11-28 12:50:03.852014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.381 [2024-11-28 12:50:03.852177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.381 [2024-11-28 12:50:03.852196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.381 [2024-11-28 12:50:03.861610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.381 [2024-11-28 12:50:03.861772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.381 [2024-11-28 12:50:03.861791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.381 [2024-11-28 12:50:03.871307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.381 [2024-11-28 12:50:03.871473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.381 [2024-11-28 12:50:03.871492] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.381 [2024-11-28 12:50:03.880990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.381 [2024-11-28 12:50:03.881159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.381 [2024-11-28 12:50:03.881177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.381 [2024-11-28 12:50:03.890687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.381 [2024-11-28 12:50:03.890854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.381 [2024-11-28 12:50:03.890873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.640 [2024-11-28 12:50:03.900836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.640 [2024-11-28 12:50:03.901011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.640 [2024-11-28 12:50:03.901030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.640 [2024-11-28 12:50:03.910570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.640 [2024-11-28 12:50:03.910738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:21.640 [2024-11-28 12:50:03.910756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.640 [2024-11-28 12:50:03.920351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.640 [2024-11-28 12:50:03.920513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.640 [2024-11-28 12:50:03.920532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.640 [2024-11-28 12:50:03.930027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.640 [2024-11-28 12:50:03.930193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.640 [2024-11-28 12:50:03.930215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.640 [2024-11-28 12:50:03.939711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.640 [2024-11-28 12:50:03.939876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.640 [2024-11-28 12:50:03.939894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.640 [2024-11-28 12:50:03.949385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.640 [2024-11-28 12:50:03.949549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 
lba:17790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.640 [2024-11-28 12:50:03.949566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.640 [2024-11-28 12:50:03.959063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.640 [2024-11-28 12:50:03.959229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.640 [2024-11-28 12:50:03.959249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.640 [2024-11-28 12:50:03.968741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.640 [2024-11-28 12:50:03.968907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.640 [2024-11-28 12:50:03.968929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.640 [2024-11-28 12:50:03.978631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.640 [2024-11-28 12:50:03.978793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.640 [2024-11-28 12:50:03.978813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.640 [2024-11-28 12:50:03.988358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.640 [2024-11-28 12:50:03.988521] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.641 [2024-11-28 12:50:03.988541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.641 [2024-11-28 12:50:03.998086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.641 [2024-11-28 12:50:03.998252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.641 [2024-11-28 12:50:03.998271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.641 [2024-11-28 12:50:04.007769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.641 [2024-11-28 12:50:04.007934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.641 [2024-11-28 12:50:04.007958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.641 [2024-11-28 12:50:04.017620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.641 [2024-11-28 12:50:04.017791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.641 [2024-11-28 12:50:04.017809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.641 [2024-11-28 12:50:04.027501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 
00:26:21.641 [2024-11-28 12:50:04.027666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.641 [2024-11-28 12:50:04.027685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.641 [2024-11-28 12:50:04.037232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.641 [2024-11-28 12:50:04.037397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.641 [2024-11-28 12:50:04.037416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.641 [2024-11-28 12:50:04.046934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.641 [2024-11-28 12:50:04.047104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.641 [2024-11-28 12:50:04.047123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.641 [2024-11-28 12:50:04.056636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.641 [2024-11-28 12:50:04.056802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.641 [2024-11-28 12:50:04.056820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.641 [2024-11-28 12:50:04.066341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.641 [2024-11-28 12:50:04.066506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.641 [2024-11-28 12:50:04.066526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.641 [2024-11-28 12:50:04.076035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.641 [2024-11-28 12:50:04.076201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.641 [2024-11-28 12:50:04.076220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.641 [2024-11-28 12:50:04.085745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.641 [2024-11-28 12:50:04.085909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.641 [2024-11-28 12:50:04.085927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.641 [2024-11-28 12:50:04.095453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.641 [2024-11-28 12:50:04.095617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.641 [2024-11-28 12:50:04.095635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.641 [2024-11-28 12:50:04.105148] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.641 [2024-11-28 12:50:04.105309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.641 [2024-11-28 12:50:04.105327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.641 [2024-11-28 12:50:04.114885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.641 [2024-11-28 12:50:04.115075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.641 [2024-11-28 12:50:04.115093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.641 [2024-11-28 12:50:04.124659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.641 [2024-11-28 12:50:04.124820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.641 [2024-11-28 12:50:04.124838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.641 [2024-11-28 12:50:04.134367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.641 [2024-11-28 12:50:04.134532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.641 [2024-11-28 12:50:04.134550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:26:21.641 [2024-11-28 12:50:04.144080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.641 [2024-11-28 12:50:04.144244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.641 [2024-11-28 12:50:04.144263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.641 [2024-11-28 12:50:04.153871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.641 [2024-11-28 12:50:04.154068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.641 [2024-11-28 12:50:04.154087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.901 [2024-11-28 12:50:04.163938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.901 [2024-11-28 12:50:04.164113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.901 [2024-11-28 12:50:04.164131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.901 [2024-11-28 12:50:04.173685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.901 [2024-11-28 12:50:04.173849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.901 [2024-11-28 12:50:04.173869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.901 [2024-11-28 12:50:04.183384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.901 [2024-11-28 12:50:04.183546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.901 [2024-11-28 12:50:04.183570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.901 [2024-11-28 12:50:04.193139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.901 [2024-11-28 12:50:04.193303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.901 [2024-11-28 12:50:04.193322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.901 [2024-11-28 12:50:04.202791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.901 [2024-11-28 12:50:04.202959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.901 [2024-11-28 12:50:04.202978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.901 [2024-11-28 12:50:04.212542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.901 [2024-11-28 12:50:04.212705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.901 [2024-11-28 12:50:04.212723] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.901 [2024-11-28 12:50:04.222380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.901 [2024-11-28 12:50:04.222546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.901 [2024-11-28 12:50:04.222567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.901 [2024-11-28 12:50:04.232231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.901 [2024-11-28 12:50:04.232396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.901 [2024-11-28 12:50:04.232415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.901 [2024-11-28 12:50:04.241904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.901 [2024-11-28 12:50:04.242094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.901 [2024-11-28 12:50:04.242113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.901 [2024-11-28 12:50:04.251896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.901 [2024-11-28 12:50:04.252077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:21.901 [2024-11-28 12:50:04.252096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.901 [2024-11-28 12:50:04.261606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.901 [2024-11-28 12:50:04.261769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.901 [2024-11-28 12:50:04.261788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.901 [2024-11-28 12:50:04.271303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.901 [2024-11-28 12:50:04.271474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.901 [2024-11-28 12:50:04.271493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.901 [2024-11-28 12:50:04.281003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.901 [2024-11-28 12:50:04.281166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.901 [2024-11-28 12:50:04.281184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.901 [2024-11-28 12:50:04.290723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.901 [2024-11-28 12:50:04.290890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 
lba:20002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.901 [2024-11-28 12:50:04.290909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.901 [2024-11-28 12:50:04.300408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.901 [2024-11-28 12:50:04.300567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.901 [2024-11-28 12:50:04.300586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.901 [2024-11-28 12:50:04.310164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.901 [2024-11-28 12:50:04.310325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.901 [2024-11-28 12:50:04.310344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.901 [2024-11-28 12:50:04.319903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.901 [2024-11-28 12:50:04.320072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.901 [2024-11-28 12:50:04.320091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.901 [2024-11-28 12:50:04.329603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.901 [2024-11-28 12:50:04.329769] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.901 [2024-11-28 12:50:04.329787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.901 [2024-11-28 12:50:04.339306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.901 [2024-11-28 12:50:04.339468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.901 [2024-11-28 12:50:04.339487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.902 [2024-11-28 12:50:04.349009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.902 [2024-11-28 12:50:04.349179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.902 [2024-11-28 12:50:04.349197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.902 [2024-11-28 12:50:04.358715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.902 [2024-11-28 12:50:04.358878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.902 [2024-11-28 12:50:04.358898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.902 [2024-11-28 12:50:04.368417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 
00:26:21.902 [2024-11-28 12:50:04.368583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.902 [2024-11-28 12:50:04.368602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.902 [2024-11-28 12:50:04.378120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.902 [2024-11-28 12:50:04.378284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.902 [2024-11-28 12:50:04.378302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.902 [2024-11-28 12:50:04.387821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.902 [2024-11-28 12:50:04.387992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.902 [2024-11-28 12:50:04.388011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.902 [2024-11-28 12:50:04.397500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.902 [2024-11-28 12:50:04.397664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.902 [2024-11-28 12:50:04.397684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.902 [2024-11-28 12:50:04.407217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:21.902 [2024-11-28 12:50:04.407379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.902 [2024-11-28 12:50:04.407397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 [2024-11-28 12:50:04.417222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.161 [2024-11-28 12:50:04.417392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.161 [2024-11-28 12:50:04.417411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 [2024-11-28 12:50:04.427139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.161 [2024-11-28 12:50:04.427302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.161 [2024-11-28 12:50:04.427320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 [2024-11-28 12:50:04.436829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.161 [2024-11-28 12:50:04.437001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.161 [2024-11-28 12:50:04.437023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 [2024-11-28 12:50:04.446529] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.161 [2024-11-28 12:50:04.446692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.161 [2024-11-28 12:50:04.446710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 26077.00 IOPS, 101.86 MiB/s [2024-11-28T11:50:04.680Z] [2024-11-28 12:50:04.456230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.161 [2024-11-28 12:50:04.456392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.161 [2024-11-28 12:50:04.456411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 [2024-11-28 12:50:04.465923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.161 [2024-11-28 12:50:04.466092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.161 [2024-11-28 12:50:04.466111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 [2024-11-28 12:50:04.475620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.161 [2024-11-28 12:50:04.475785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.161 [2024-11-28 12:50:04.475804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 [2024-11-28 12:50:04.485416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.161 [2024-11-28 12:50:04.485579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.161 [2024-11-28 12:50:04.485600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 [2024-11-28 12:50:04.495134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.161 [2024-11-28 12:50:04.495297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.161 [2024-11-28 12:50:04.495315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 [2024-11-28 12:50:04.504811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.161 [2024-11-28 12:50:04.504974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.161 [2024-11-28 12:50:04.504993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 [2024-11-28 12:50:04.514701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.161 [2024-11-28 12:50:04.514869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.161 [2024-11-28 12:50:04.514889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 [2024-11-28 12:50:04.524425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.161 [2024-11-28 12:50:04.524589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.161 [2024-11-28 12:50:04.524607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 [2024-11-28 12:50:04.534170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.161 [2024-11-28 12:50:04.534332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.161 [2024-11-28 12:50:04.534350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 [2024-11-28 12:50:04.543836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.161 [2024-11-28 12:50:04.544005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.161 [2024-11-28 12:50:04.544024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 [2024-11-28 12:50:04.553573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.161 [2024-11-28 12:50:04.553737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:22.161 [2024-11-28 12:50:04.553755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 [2024-11-28 12:50:04.563265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.161 [2024-11-28 12:50:04.563429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.161 [2024-11-28 12:50:04.563447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 [2024-11-28 12:50:04.572957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.161 [2024-11-28 12:50:04.573120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.161 [2024-11-28 12:50:04.573139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 [2024-11-28 12:50:04.582653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.161 [2024-11-28 12:50:04.582813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.161 [2024-11-28 12:50:04.582831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 [2024-11-28 12:50:04.592359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.161 [2024-11-28 12:50:04.592522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 
lba:4617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.161 [2024-11-28 12:50:04.592542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 [2024-11-28 12:50:04.602037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.161 [2024-11-28 12:50:04.602204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.161 [2024-11-28 12:50:04.602227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 [2024-11-28 12:50:04.611687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.161 [2024-11-28 12:50:04.611849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.161 [2024-11-28 12:50:04.611867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 [2024-11-28 12:50:04.621414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.161 [2024-11-28 12:50:04.621576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.161 [2024-11-28 12:50:04.621594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 [2024-11-28 12:50:04.631185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.161 [2024-11-28 12:50:04.631349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.161 [2024-11-28 12:50:04.631367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.161 [2024-11-28 12:50:04.640878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.162 [2024-11-28 12:50:04.641048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.162 [2024-11-28 12:50:04.641067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.162 [2024-11-28 12:50:04.650579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.162 [2024-11-28 12:50:04.650741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.162 [2024-11-28 12:50:04.650759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.162 [2024-11-28 12:50:04.660278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.162 [2024-11-28 12:50:04.660439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.162 [2024-11-28 12:50:04.660458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.162 [2024-11-28 12:50:04.669958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 
00:26:22.162 [2024-11-28 12:50:04.670123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.162 [2024-11-28 12:50:04.670141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.680023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.680192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.680210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.689834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.690011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.690030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.699554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.699718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.699736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.709307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.709472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.709490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.719106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.719272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.719293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.728795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.728967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.728989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.738699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.738864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.738884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.748454] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.748618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.748637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.758165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.758330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.758348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.767855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.768026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.768044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.777555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.777717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.777736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.787302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.787465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.787484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.796995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.797159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.797178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.806634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.806796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.806813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.816447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.816608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.816627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.826133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.826294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.826313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.835821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.835990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.836008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.845522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.845687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.845705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.855222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.855386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.855407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.864897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.865067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.865088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.874579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.874742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.874760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.884327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.884489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.884507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.894020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.894183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:22.421 [2024-11-28 12:50:04.894201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.903709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.903869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.903886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.913409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.913569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.421 [2024-11-28 12:50:04.913587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.421 [2024-11-28 12:50:04.923151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.421 [2024-11-28 12:50:04.923312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.422 [2024-11-28 12:50:04.923330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.422 [2024-11-28 12:50:04.932883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.422 [2024-11-28 12:50:04.933057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 
lba:5886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.422 [2024-11-28 12:50:04.933076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.680 [2024-11-28 12:50:04.943021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.680 [2024-11-28 12:50:04.943189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.680 [2024-11-28 12:50:04.943208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.680 [2024-11-28 12:50:04.952713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.680 [2024-11-28 12:50:04.952875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.680 [2024-11-28 12:50:04.952893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.680 [2024-11-28 12:50:04.962391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.680 [2024-11-28 12:50:04.962552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.680 [2024-11-28 12:50:04.962570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.680 [2024-11-28 12:50:04.972138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.680 [2024-11-28 12:50:04.972301] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.680 [2024-11-28 12:50:04.972320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.681 [2024-11-28 12:50:04.981825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.681 [2024-11-28 12:50:04.982000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.681 [2024-11-28 12:50:04.982021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.681 [2024-11-28 12:50:04.991690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.681 [2024-11-28 12:50:04.991853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.681 [2024-11-28 12:50:04.991874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.681 [2024-11-28 12:50:05.001402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.681 [2024-11-28 12:50:05.001566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.681 [2024-11-28 12:50:05.001584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.681 [2024-11-28 12:50:05.011130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 
00:26:22.681 [2024-11-28 12:50:05.011290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.681 [2024-11-28 12:50:05.011310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.681 [2024-11-28 12:50:05.020867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.681 [2024-11-28 12:50:05.021038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.681 [2024-11-28 12:50:05.021056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.681 [2024-11-28 12:50:05.030436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.681 [2024-11-28 12:50:05.030598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.681 [2024-11-28 12:50:05.030617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.681 [2024-11-28 12:50:05.040215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.681 [2024-11-28 12:50:05.040376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.681 [2024-11-28 12:50:05.040394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.681 [2024-11-28 12:50:05.049907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.681 [2024-11-28 12:50:05.050076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.681 [2024-11-28 12:50:05.050094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.681 [2024-11-28 12:50:05.059602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.681 [2024-11-28 12:50:05.059764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.681 [2024-11-28 12:50:05.059782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.681 [2024-11-28 12:50:05.069293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.681 [2024-11-28 12:50:05.069456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.681 [2024-11-28 12:50:05.069474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.681 [2024-11-28 12:50:05.079012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.681 [2024-11-28 12:50:05.079176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.681 [2024-11-28 12:50:05.079195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.681 [2024-11-28 12:50:05.088694] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.681 [2024-11-28 12:50:05.088856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.681 [2024-11-28 12:50:05.088874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.681 [2024-11-28 12:50:05.098389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.681 [2024-11-28 12:50:05.098549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.681 [2024-11-28 12:50:05.098567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.681 [2024-11-28 12:50:05.108119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.681 [2024-11-28 12:50:05.108282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.681 [2024-11-28 12:50:05.108303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.681 [2024-11-28 12:50:05.117858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.681 [2024-11-28 12:50:05.118027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.681 [2024-11-28 12:50:05.118044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:26:22.681 [2024-11-28 12:50:05.127530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.681 [2024-11-28 12:50:05.127690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.681 [2024-11-28 12:50:05.127708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.681 [2024-11-28 12:50:05.137220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.681 [2024-11-28 12:50:05.137382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.681 [2024-11-28 12:50:05.137400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.681 [2024-11-28 12:50:05.146914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.681 [2024-11-28 12:50:05.147081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.681 [2024-11-28 12:50:05.147099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.681 [2024-11-28 12:50:05.156592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.681 [2024-11-28 12:50:05.156754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.681 [2024-11-28 12:50:05.156772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.681 [2024-11-28 12:50:05.166292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.681 [2024-11-28 12:50:05.166451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.681 [2024-11-28 12:50:05.166469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.681 [2024-11-28 12:50:05.176038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.681 [2024-11-28 12:50:05.176199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.681 [2024-11-28 12:50:05.176217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.681 [2024-11-28 12:50:05.185657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.681 [2024-11-28 12:50:05.185817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.681 [2024-11-28 12:50:05.185835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.681 [2024-11-28 12:50:05.195555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.681 [2024-11-28 12:50:05.195730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.681 [2024-11-28 12:50:05.195748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.939 [2024-11-28 12:50:05.205617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.939 [2024-11-28 12:50:05.205781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.939 [2024-11-28 12:50:05.205799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.939 [2024-11-28 12:50:05.215352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.939 [2024-11-28 12:50:05.215517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.939 [2024-11-28 12:50:05.215535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.939 [2024-11-28 12:50:05.225148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.939 [2024-11-28 12:50:05.225312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.939 [2024-11-28 12:50:05.225329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.939 [2024-11-28 12:50:05.234844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.939 [2024-11-28 12:50:05.235018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:22.939 [2024-11-28 12:50:05.235041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.939 [2024-11-28 12:50:05.244922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.939 [2024-11-28 12:50:05.245095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.940 [2024-11-28 12:50:05.245117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.940 [2024-11-28 12:50:05.254635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.940 [2024-11-28 12:50:05.254799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.940 [2024-11-28 12:50:05.254819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.940 [2024-11-28 12:50:05.264329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.940 [2024-11-28 12:50:05.264491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.940 [2024-11-28 12:50:05.264510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.940 [2024-11-28 12:50:05.274041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.940 [2024-11-28 12:50:05.274203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 
lba:3564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.940 [2024-11-28 12:50:05.274222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.940 [2024-11-28 12:50:05.283760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.940 [2024-11-28 12:50:05.283925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.940 [2024-11-28 12:50:05.283943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.940 [2024-11-28 12:50:05.293491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.940 [2024-11-28 12:50:05.293652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.940 [2024-11-28 12:50:05.293670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.940 [2024-11-28 12:50:05.303258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.940 [2024-11-28 12:50:05.303419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.940 [2024-11-28 12:50:05.303438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.940 [2024-11-28 12:50:05.312959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.940 [2024-11-28 12:50:05.313122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.940 [2024-11-28 12:50:05.313141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.940 [2024-11-28 12:50:05.322824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.940 [2024-11-28 12:50:05.322993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.940 [2024-11-28 12:50:05.323011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.940 [2024-11-28 12:50:05.332520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.940 [2024-11-28 12:50:05.332682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.940 [2024-11-28 12:50:05.332700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.940 [2024-11-28 12:50:05.342219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.940 [2024-11-28 12:50:05.342379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.940 [2024-11-28 12:50:05.342397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.940 [2024-11-28 12:50:05.351900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 
00:26:22.940 [2024-11-28 12:50:05.352071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.940 [2024-11-28 12:50:05.352089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.940 [2024-11-28 12:50:05.361581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.940 [2024-11-28 12:50:05.361742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.940 [2024-11-28 12:50:05.361764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.940 [2024-11-28 12:50:05.371293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.940 [2024-11-28 12:50:05.371454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.940 [2024-11-28 12:50:05.371472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.940 [2024-11-28 12:50:05.380959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.940 [2024-11-28 12:50:05.381124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.940 [2024-11-28 12:50:05.381141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.940 [2024-11-28 12:50:05.390675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.940 [2024-11-28 12:50:05.390837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.940 [2024-11-28 12:50:05.390857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.940 [2024-11-28 12:50:05.400369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.940 [2024-11-28 12:50:05.400531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.940 [2024-11-28 12:50:05.400550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.940 [2024-11-28 12:50:05.410079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.940 [2024-11-28 12:50:05.410244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.940 [2024-11-28 12:50:05.410262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.940 [2024-11-28 12:50:05.419838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.940 [2024-11-28 12:50:05.420008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.940 [2024-11-28 12:50:05.420027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.940 [2024-11-28 12:50:05.429583] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.940 [2024-11-28 12:50:05.429748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.940 [2024-11-28 12:50:05.429767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.940 [2024-11-28 12:50:05.439334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.940 [2024-11-28 12:50:05.439497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.940 [2024-11-28 12:50:05.439516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:22.940 [2024-11-28 12:50:05.449045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1460180) with pdu=0x200016efd640 00:26:22.940 [2024-11-28 12:50:05.449213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.940 [2024-11-28 12:50:05.449231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:23.198 26171.00 IOPS, 102.23 MiB/s 00:26:23.198 Latency(us) 00:26:23.198 [2024-11-28T11:50:05.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.198 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:23.198 nvme0n1 : 2.01 26169.14 102.22 0.00 0.00 4882.57 3675.71 11283.59 00:26:23.198 [2024-11-28T11:50:05.717Z] =================================================================================================================== 
00:26:23.198 [2024-11-28T11:50:05.717Z] Total : 26169.14 102.22 0.00 0.00 4882.57 3675.71 11283.59 00:26:23.198 { 00:26:23.198 "results": [ 00:26:23.198 { 00:26:23.198 "job": "nvme0n1", 00:26:23.198 "core_mask": "0x2", 00:26:23.198 "workload": "randwrite", 00:26:23.198 "status": "finished", 00:26:23.198 "queue_depth": 128, 00:26:23.198 "io_size": 4096, 00:26:23.198 "runtime": 2.006256, 00:26:23.198 "iops": 26169.14292094329, 00:26:23.198 "mibps": 102.22321453493473, 00:26:23.198 "io_failed": 0, 00:26:23.198 "io_timeout": 0, 00:26:23.198 "avg_latency_us": 4882.5683856681235, 00:26:23.198 "min_latency_us": 3675.7147826086957, 00:26:23.198 "max_latency_us": 11283.589565217391 00:26:23.198 } 00:26:23.198 ], 00:26:23.198 "core_count": 1 00:26:23.198 } 00:26:23.198 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:23.198 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:23.198 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:23.198 | .driver_specific 00:26:23.198 | .nvme_error 00:26:23.198 | .status_code 00:26:23.198 | .command_transient_transport_error' 00:26:23.198 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:23.198 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 205 > 0 )) 00:26:23.198 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2670552 00:26:23.198 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2670552 ']' 00:26:23.198 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2670552 00:26:23.198 12:50:05 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:23.198 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:23.198 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2670552 00:26:23.460 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:23.460 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:23.460 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2670552' 00:26:23.460 killing process with pid 2670552 00:26:23.460 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2670552 00:26:23.460 Received shutdown signal, test time was about 2.000000 seconds 00:26:23.460 00:26:23.460 Latency(us) 00:26:23.460 [2024-11-28T11:50:05.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.460 [2024-11-28T11:50:05.979Z] =================================================================================================================== 00:26:23.460 [2024-11-28T11:50:05.979Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:23.460 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2670552 00:26:23.460 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:23.460 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:23.460 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:23.460 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 
00:26:23.460 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:23.460 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2671240 00:26:23.460 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2671240 /var/tmp/bperf.sock 00:26:23.460 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:23.460 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2671240 ']' 00:26:23.460 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:23.460 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:23.460 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:23.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:23.460 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:23.460 12:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:23.460 [2024-11-28 12:50:05.940495] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:26:23.460 [2024-11-28 12:50:05.940543] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671240 ] 00:26:23.460 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:23.460 Zero copy mechanism will not be used. 00:26:23.789 [2024-11-28 12:50:06.001575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.789 [2024-11-28 12:50:06.042260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.789 12:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:23.789 12:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:23.789 12:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:23.789 12:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:24.097 12:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:24.097 12:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.097 12:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:24.097 12:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.097 12:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:24.097 12:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:24.379 nvme0n1 00:26:24.379 12:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:24.379 12:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.379 12:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:24.379 12:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.379 12:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:24.380 12:50:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:24.380 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:24.380 Zero copy mechanism will not be used. 00:26:24.380 Running I/O for 2 seconds... 
00:26:24.380 [2024-11-28 12:50:06.869160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.380 [2024-11-28 12:50:06.869270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.380 [2024-11-28 12:50:06.869299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.380 [2024-11-28 12:50:06.875340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.380 [2024-11-28 12:50:06.875498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.380 [2024-11-28 12:50:06.875522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.380 [2024-11-28 12:50:06.882127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.380 [2024-11-28 12:50:06.882263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.380 [2024-11-28 12:50:06.882286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.649 [2024-11-28 12:50:06.888891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.649 [2024-11-28 12:50:06.889020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.649 [2024-11-28 12:50:06.889042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.649 [2024-11-28 12:50:06.895077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.649 [2024-11-28 12:50:06.895219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.649 [2024-11-28 12:50:06.895240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.649 [2024-11-28 12:50:06.899875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.649 [2024-11-28 12:50:06.899963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.649 [2024-11-28 12:50:06.899983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.649 [2024-11-28 12:50:06.904986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.649 [2024-11-28 12:50:06.905059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.649 [2024-11-28 12:50:06.905082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.649 [2024-11-28 12:50:06.910135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.649 [2024-11-28 12:50:06.910218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.649 [2024-11-28 12:50:06.910238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.649 [2024-11-28 12:50:06.914974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.649 [2024-11-28 12:50:06.915056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:06.915075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:06.919595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:06.919666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:06.919685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:06.924132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:06.924217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:06.924236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:06.928647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:06.928724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:24.650 [2024-11-28 12:50:06.928743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:06.933198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:06.933272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:06.933291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:06.937737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:06.937809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:06.937827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:06.942209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:06.942274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:06.942293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:06.946656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:06.946752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:06.946774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:06.951133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:06.951214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:06.951234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:06.955572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:06.955652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:06.955671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:06.960007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:06.960070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:06.960090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:06.964485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:06.964563] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:06.964581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:06.969694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:06.969762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:06.969781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:06.974344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:06.974430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:06.974449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:06.978905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:06.978990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:06.979009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:06.983421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 
00:26:24.650 [2024-11-28 12:50:06.983519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:06.983537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:06.988177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:06.988267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:06.988286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:06.993170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:06.993244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:06.993262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:06.997814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:06.997895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:06.997914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.002756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.002835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.002854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.008188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.008278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.008297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.015122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.015188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.015208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.020940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.021033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.021052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.028526] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.028699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.028718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.035842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.036153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.036178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.042781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.043103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.043124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.049222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.049537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.049556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:24.650 [2024-11-28 12:50:07.055534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.055858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.055878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.061824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.062141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.062162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.067957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.068273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.068293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.074573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.074893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.074914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.081304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.081628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.081648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.087738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.088049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.088069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.094582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.094902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.094923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.101315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.101644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.101664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.107852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.108157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.108178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.114103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.114402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.114423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.120008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.120265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.120296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.124736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.125014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:24.650 [2024-11-28 12:50:07.125037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.129435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.129684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.129706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.134151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.134406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.134427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.138578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.138829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.138850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.143658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.143912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.143933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.149370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.149616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.149636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.154761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.155021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.155044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.160097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.160345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.160365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.650 [2024-11-28 12:50:07.165223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.650 [2024-11-28 12:50:07.165474] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.650 [2024-11-28 12:50:07.165495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.910 [2024-11-28 12:50:07.170044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.910 [2024-11-28 12:50:07.170289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.910 [2024-11-28 12:50:07.170309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.910 [2024-11-28 12:50:07.174647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.910 [2024-11-28 12:50:07.174901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.910 [2024-11-28 12:50:07.174921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.910 [2024-11-28 12:50:07.179127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.910 [2024-11-28 12:50:07.179364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.910 [2024-11-28 12:50:07.179384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.910 [2024-11-28 12:50:07.183924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 
00:26:24.910 [2024-11-28 12:50:07.184190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.910 [2024-11-28 12:50:07.184217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.910 [2024-11-28 12:50:07.188551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.910 [2024-11-28 12:50:07.188808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.910 [2024-11-28 12:50:07.188828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.910 [2024-11-28 12:50:07.193707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.910 [2024-11-28 12:50:07.193967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.910 [2024-11-28 12:50:07.193987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.910 [2024-11-28 12:50:07.198399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.910 [2024-11-28 12:50:07.198660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.910 [2024-11-28 12:50:07.198681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.910 [2024-11-28 12:50:07.202821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.910 [2024-11-28 12:50:07.203088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.910 [2024-11-28 12:50:07.203109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.910 [2024-11-28 12:50:07.207115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.910 [2024-11-28 12:50:07.207370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.910 [2024-11-28 12:50:07.207390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.910 [2024-11-28 12:50:07.211375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.910 [2024-11-28 12:50:07.211629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.910 [2024-11-28 12:50:07.211649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.910 [2024-11-28 12:50:07.215720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.910 [2024-11-28 12:50:07.215985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.910 [2024-11-28 12:50:07.216007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.910 [2024-11-28 12:50:07.219976] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.910 [2024-11-28 12:50:07.220226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.910 [2024-11-28 12:50:07.220246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.910 [2024-11-28 12:50:07.224221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.910 [2024-11-28 12:50:07.224481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.910 [2024-11-28 12:50:07.224501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.910 [2024-11-28 12:50:07.228425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.910 [2024-11-28 12:50:07.228687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.910 [2024-11-28 12:50:07.228708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.910 [2024-11-28 12:50:07.232647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.910 [2024-11-28 12:50:07.232917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.910 [2024-11-28 12:50:07.232938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:26:24.911 [2024-11-28 12:50:07.236865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.911 [2024-11-28 12:50:07.237136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.911 [2024-11-28 12:50:07.237157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.911 [2024-11-28 12:50:07.241103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.911 [2024-11-28 12:50:07.241357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.911 [2024-11-28 12:50:07.241377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.911 [2024-11-28 12:50:07.246136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.911 [2024-11-28 12:50:07.246397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.911 [2024-11-28 12:50:07.246418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.911 [2024-11-28 12:50:07.250450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:24.911 [2024-11-28 12:50:07.250704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.911 [2024-11-28 12:50:07.250726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:24.911 [2024-11-28 12:50:07.254712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.911 [2024-11-28 12:50:07.254970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.911 [2024-11-28 12:50:07.254990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:24.911 [2024-11-28 12:50:07.258931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.911 [2024-11-28 12:50:07.259204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.911 [2024-11-28 12:50:07.259224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:24.911 [2024-11-28 12:50:07.263132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.911 [2024-11-28 12:50:07.263379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.911 [2024-11-28 12:50:07.263400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:24.911 [2024-11-28 12:50:07.267583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.911 [2024-11-28 12:50:07.267836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.911 [2024-11-28 12:50:07.267857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:24.911 [2024-11-28 12:50:07.273206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.911 [2024-11-28 12:50:07.273538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.911 [2024-11-28 12:50:07.273559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:24.911 [2024-11-28 12:50:07.279259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.911 [2024-11-28 12:50:07.279501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.911 [2024-11-28 12:50:07.279522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:24.911 [2024-11-28 12:50:07.284404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.911 [2024-11-28 12:50:07.284747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.911 [2024-11-28 12:50:07.284767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:24.911 [2024-11-28 12:50:07.290630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.911 [2024-11-28 12:50:07.290974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.911 [2024-11-28 12:50:07.290995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:24.911 [2024-11-28 12:50:07.296855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.911 [2024-11-28 12:50:07.297207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.911 [2024-11-28 12:50:07.297227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:24.911 [2024-11-28 12:50:07.303212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.911 [2024-11-28 12:50:07.303510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.911 [2024-11-28 12:50:07.303529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:24.911 [2024-11-28 12:50:07.309333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.911 [2024-11-28 12:50:07.309647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.911 [2024-11-28 12:50:07.309670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:24.911 [2024-11-28 12:50:07.315817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.911 [2024-11-28 12:50:07.316136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.911 [2024-11-28 12:50:07.316182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:24.911 [2024-11-28 12:50:07.322133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.911 [2024-11-28 12:50:07.322448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.911 [2024-11-28 12:50:07.322468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:24.911 [2024-11-28 12:50:07.328513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.911 [2024-11-28 12:50:07.328816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.911 [2024-11-28 12:50:07.328837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:24.911 [2024-11-28 12:50:07.334588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.911 [2024-11-28 12:50:07.334910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.911 [2024-11-28 12:50:07.334930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:24.911 [2024-11-28 12:50:07.340854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.911 [2024-11-28 12:50:07.341216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.911 [2024-11-28 12:50:07.341237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:24.911 [2024-11-28 12:50:07.347477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.911 [2024-11-28 12:50:07.347797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.911 [2024-11-28 12:50:07.347817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:24.911 [2024-11-28 12:50:07.354124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.911 [2024-11-28 12:50:07.354450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.911 [2024-11-28 12:50:07.354470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:24.911 [2024-11-28 12:50:07.360847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.911 [2024-11-28 12:50:07.361124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.911 [2024-11-28 12:50:07.361145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:24.911 [2024-11-28 12:50:07.367252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.911 [2024-11-28 12:50:07.367494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.911 [2024-11-28 12:50:07.367515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:24.911 [2024-11-28 12:50:07.373309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.911 [2024-11-28 12:50:07.373546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.911 [2024-11-28 12:50:07.373566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:24.911 [2024-11-28 12:50:07.378889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.911 [2024-11-28 12:50:07.379137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.912 [2024-11-28 12:50:07.379158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:24.912 [2024-11-28 12:50:07.383925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.912 [2024-11-28 12:50:07.384191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.912 [2024-11-28 12:50:07.384212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:24.912 [2024-11-28 12:50:07.388567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.912 [2024-11-28 12:50:07.388823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.912 [2024-11-28 12:50:07.388844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:24.912 [2024-11-28 12:50:07.392873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.912 [2024-11-28 12:50:07.393129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.912 [2024-11-28 12:50:07.393150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:24.912 [2024-11-28 12:50:07.397521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.912 [2024-11-28 12:50:07.397777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.912 [2024-11-28 12:50:07.397797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:24.912 [2024-11-28 12:50:07.402205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.912 [2024-11-28 12:50:07.402452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.912 [2024-11-28 12:50:07.402473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:24.912 [2024-11-28 12:50:07.406463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.912 [2024-11-28 12:50:07.406719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.912 [2024-11-28 12:50:07.406739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:24.912 [2024-11-28 12:50:07.410685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.912 [2024-11-28 12:50:07.410936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.912 [2024-11-28 12:50:07.410963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:24.912 [2024-11-28 12:50:07.414943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.912 [2024-11-28 12:50:07.415213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.912 [2024-11-28 12:50:07.415234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:24.912 [2024-11-28 12:50:07.419169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.912 [2024-11-28 12:50:07.419428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.912 [2024-11-28 12:50:07.419449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:24.912 [2024-11-28 12:50:07.423782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:24.912 [2024-11-28 12:50:07.424048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.912 [2024-11-28 12:50:07.424068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:25.171 [2024-11-28 12:50:07.428397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.171 [2024-11-28 12:50:07.428652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.171 [2024-11-28 12:50:07.428672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:25.171 [2024-11-28 12:50:07.432722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.171 [2024-11-28 12:50:07.433000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.171 [2024-11-28 12:50:07.433020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:25.171 [2024-11-28 12:50:07.436959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.171 [2024-11-28 12:50:07.437218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.171 [2024-11-28 12:50:07.437239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:25.171 [2024-11-28 12:50:07.441141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.171 [2024-11-28 12:50:07.441398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.171 [2024-11-28 12:50:07.441420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:25.171 [2024-11-28 12:50:07.445306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.171 [2024-11-28 12:50:07.445555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.171 [2024-11-28 12:50:07.445579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:25.171 [2024-11-28 12:50:07.449454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.171 [2024-11-28 12:50:07.449706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.171 [2024-11-28 12:50:07.449727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:25.171 [2024-11-28 12:50:07.453642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.171 [2024-11-28 12:50:07.453901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.171 [2024-11-28 12:50:07.453922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:25.171 [2024-11-28 12:50:07.457802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.171 [2024-11-28 12:50:07.458076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.171 [2024-11-28 12:50:07.458096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:25.171 [2024-11-28 12:50:07.462030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.171 [2024-11-28 12:50:07.462291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.171 [2024-11-28 12:50:07.462311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:25.171 [2024-11-28 12:50:07.466208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.171 [2024-11-28 12:50:07.466466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.171 [2024-11-28 12:50:07.466486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:25.171 [2024-11-28 12:50:07.470374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.171 [2024-11-28 12:50:07.470637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.171 [2024-11-28 12:50:07.470657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:25.171 [2024-11-28 12:50:07.474563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.171 [2024-11-28 12:50:07.474835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.171 [2024-11-28 12:50:07.474855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:25.171 [2024-11-28 12:50:07.478745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.171 [2024-11-28 12:50:07.479013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.171 [2024-11-28 12:50:07.479033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.482933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.483209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.483230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.487126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.487389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.487409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.491313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.491572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.491591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.496171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.496426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.496446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.500766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.501052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.501072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.506329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.506663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.506683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.512093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.512335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.512356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.517032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.517298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.517318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.521825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.522089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.522109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.526045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.526297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.526317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.530613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.530871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.530891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.535310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.535565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.535586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.540306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.540563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.540584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.545053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.545300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.545320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.549727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.549983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.550003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.554392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.554649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.554669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.559264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.559527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.559547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.564187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.564440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.564463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.568770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.569055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.569076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.573468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.573726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.573746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.578215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.578470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.578490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.583084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.583332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.583352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.587787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.588047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.588068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.592445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.592705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.592725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.597004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.597265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.597284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.601719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.601968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.601988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:25.172 [2024-11-28 12:50:07.606329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.172 [2024-11-28 12:50:07.606586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.172 [2024-11-28 12:50:07.606606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:25.173 [2024-11-28 12:50:07.610849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.173 [2024-11-28 12:50:07.611113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.173 [2024-11-28 12:50:07.611134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:25.173 [2024-11-28 12:50:07.615331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.173 [2024-11-28 12:50:07.615586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.173 [2024-11-28 12:50:07.615606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:25.173 [2024-11-28 12:50:07.620387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.173 [2024-11-28 12:50:07.620649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.173 [2024-11-28 12:50:07.620669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:25.173 [2024-11-28 12:50:07.624746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.173 [2024-11-28 12:50:07.625007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.173 [2024-11-28 12:50:07.625043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:25.173 [2024-11-28 12:50:07.629231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.173 [2024-11-28 12:50:07.629499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.173 [2024-11-28 12:50:07.629520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:25.173 [2024-11-28 12:50:07.633677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.173 [2024-11-28 12:50:07.633940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.173 [2024-11-28 12:50:07.633967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:25.173 [2024-11-28 12:50:07.638073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.173 [2024-11-28 12:50:07.638328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.173 [2024-11-28 12:50:07.638348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:25.173 [2024-11-28 12:50:07.642418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.173 [2024-11-28 12:50:07.642680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.173 [2024-11-28 12:50:07.642700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:25.173 [2024-11-28 12:50:07.646796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.173 [2024-11-28 12:50:07.647059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.173 [2024-11-28 12:50:07.647079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:25.173 [2024-11-28 12:50:07.651174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8
00:26:25.173 [2024-11-28 12:50:07.651421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.173 [2024-11-28 12:50:07.651441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.173 [2024-11-28 12:50:07.655967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.173 [2024-11-28 12:50:07.656217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.173 [2024-11-28 12:50:07.656237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.173 [2024-11-28 12:50:07.660806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.173 [2024-11-28 12:50:07.661070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.173 [2024-11-28 12:50:07.661091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.173 [2024-11-28 12:50:07.666498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.173 [2024-11-28 12:50:07.666761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.173 [2024-11-28 12:50:07.666780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.173 [2024-11-28 12:50:07.671901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.173 [2024-11-28 12:50:07.672148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.173 [2024-11-28 12:50:07.672170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.173 [2024-11-28 12:50:07.677201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.173 [2024-11-28 12:50:07.677451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.173 [2024-11-28 12:50:07.677472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.173 [2024-11-28 12:50:07.682706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.173 [2024-11-28 12:50:07.682956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.173 [2024-11-28 12:50:07.682976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.433 [2024-11-28 12:50:07.688198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.433 [2024-11-28 12:50:07.688443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.433 [2024-11-28 12:50:07.688466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.433 [2024-11-28 12:50:07.694154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.433 [2024-11-28 12:50:07.694401] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.433 [2024-11-28 12:50:07.694421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.433 [2024-11-28 12:50:07.699389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.433 [2024-11-28 12:50:07.699643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.433 [2024-11-28 12:50:07.699664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.433 [2024-11-28 12:50:07.704555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.433 [2024-11-28 12:50:07.704808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.433 [2024-11-28 12:50:07.704828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.433 [2024-11-28 12:50:07.709844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.433 [2024-11-28 12:50:07.710092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.433 [2024-11-28 12:50:07.710113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.433 [2024-11-28 12:50:07.715632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.433 [2024-11-28 12:50:07.715879] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.433 [2024-11-28 12:50:07.715899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.433 [2024-11-28 12:50:07.721131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.433 [2024-11-28 12:50:07.721391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.433 [2024-11-28 12:50:07.721411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.433 [2024-11-28 12:50:07.725920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.433 [2024-11-28 12:50:07.726232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.433 [2024-11-28 12:50:07.726252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.433 [2024-11-28 12:50:07.730487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.433 [2024-11-28 12:50:07.730727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.433 [2024-11-28 12:50:07.730747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.433 [2024-11-28 12:50:07.734848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with 
pdu=0x200016eff3c8 00:26:25.433 [2024-11-28 12:50:07.735112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.433 [2024-11-28 12:50:07.735133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.433 [2024-11-28 12:50:07.739212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.433 [2024-11-28 12:50:07.739468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.433 [2024-11-28 12:50:07.739489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.433 [2024-11-28 12:50:07.743585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.433 [2024-11-28 12:50:07.743853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.433 [2024-11-28 12:50:07.743874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.433 [2024-11-28 12:50:07.747943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.433 [2024-11-28 12:50:07.748206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.433 [2024-11-28 12:50:07.748228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.433 [2024-11-28 12:50:07.752435] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.433 [2024-11-28 12:50:07.752702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.433 [2024-11-28 12:50:07.752723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.433 [2024-11-28 12:50:07.757230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.433 [2024-11-28 12:50:07.757491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.433 [2024-11-28 12:50:07.757510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.433 [2024-11-28 12:50:07.761635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.433 [2024-11-28 12:50:07.761887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.433 [2024-11-28 12:50:07.761908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.433 [2024-11-28 12:50:07.766476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.433 [2024-11-28 12:50:07.766736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.434 [2024-11-28 12:50:07.766756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.434 [2024-11-28 
12:50:07.772111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.434 [2024-11-28 12:50:07.772362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.434 [2024-11-28 12:50:07.772383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.434 [2024-11-28 12:50:07.778130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.434 [2024-11-28 12:50:07.778386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.434 [2024-11-28 12:50:07.778406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.434 [2024-11-28 12:50:07.783460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.434 [2024-11-28 12:50:07.783704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.434 [2024-11-28 12:50:07.783725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.434 [2024-11-28 12:50:07.788751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.434 [2024-11-28 12:50:07.789004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.434 [2024-11-28 12:50:07.789025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:26:25.434 [2024-11-28 12:50:07.793974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.434 [2024-11-28 12:50:07.794216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.434 [2024-11-28 12:50:07.794237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.434 [2024-11-28 12:50:07.799672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.434 [2024-11-28 12:50:07.799907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.434 [2024-11-28 12:50:07.799928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.434 [2024-11-28 12:50:07.805501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.434 [2024-11-28 12:50:07.805758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.434 [2024-11-28 12:50:07.805778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.434 [2024-11-28 12:50:07.810738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.434 [2024-11-28 12:50:07.810987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.434 [2024-11-28 12:50:07.811008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.434 [2024-11-28 12:50:07.816465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.434 [2024-11-28 12:50:07.816715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.434 [2024-11-28 12:50:07.816735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.434 [2024-11-28 12:50:07.822123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.434 [2024-11-28 12:50:07.822347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.434 [2024-11-28 12:50:07.822371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.434 [2024-11-28 12:50:07.827550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.434 [2024-11-28 12:50:07.827795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.434 [2024-11-28 12:50:07.827815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.434 [2024-11-28 12:50:07.832638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.434 [2024-11-28 12:50:07.832890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.434 [2024-11-28 12:50:07.832910] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.434 [2024-11-28 12:50:07.837855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.434 [2024-11-28 12:50:07.838126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.434 [2024-11-28 12:50:07.838147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.434 [2024-11-28 12:50:07.843324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.434 [2024-11-28 12:50:07.843563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.434 [2024-11-28 12:50:07.843583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.434 [2024-11-28 12:50:07.849130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.434 [2024-11-28 12:50:07.849379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.434 [2024-11-28 12:50:07.849399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.434 [2024-11-28 12:50:07.854693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.434 [2024-11-28 12:50:07.854943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:25.434 [2024-11-28 12:50:07.854969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.434 [2024-11-28 12:50:07.860025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.434 [2024-11-28 12:50:07.860276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.434 [2024-11-28 12:50:07.860296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.434 [2024-11-28 12:50:07.865515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.434 [2024-11-28 12:50:07.865759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.434 [2024-11-28 12:50:07.865779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.434 6077.00 IOPS, 759.62 MiB/s [2024-11-28T11:50:07.953Z] [2024-11-28 12:50:07.872023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.434 [2024-11-28 12:50:07.872269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.434 [2024-11-28 12:50:07.872288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.434 [2024-11-28 12:50:07.877383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.434 [2024-11-28 12:50:07.877656] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.434 [2024-11-28 12:50:07.877677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.434 [2024-11-28 12:50:07.882408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.434 [2024-11-28 12:50:07.882664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.434 [2024-11-28 12:50:07.882684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.434 [2024-11-28 12:50:07.887071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.434 [2024-11-28 12:50:07.887326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.434 [2024-11-28 12:50:07.887347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.434 [2024-11-28 12:50:07.891523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.434 [2024-11-28 12:50:07.891781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.434 [2024-11-28 12:50:07.891801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.434 [2024-11-28 12:50:07.895869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 
00:26:25.434 [2024-11-28 12:50:07.896126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.435 [2024-11-28 12:50:07.896146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.435 [2024-11-28 12:50:07.900173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.435 [2024-11-28 12:50:07.900422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.435 [2024-11-28 12:50:07.900443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.435 [2024-11-28 12:50:07.904525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.435 [2024-11-28 12:50:07.904782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.435 [2024-11-28 12:50:07.904802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.435 [2024-11-28 12:50:07.908890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.435 [2024-11-28 12:50:07.909150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.435 [2024-11-28 12:50:07.909172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.435 [2024-11-28 12:50:07.913212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.435 [2024-11-28 12:50:07.913458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.435 [2024-11-28 12:50:07.913479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.435 [2024-11-28 12:50:07.917582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.435 [2024-11-28 12:50:07.917842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.435 [2024-11-28 12:50:07.917863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.435 [2024-11-28 12:50:07.921917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.435 [2024-11-28 12:50:07.922174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.435 [2024-11-28 12:50:07.922195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.435 [2024-11-28 12:50:07.926220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.435 [2024-11-28 12:50:07.926487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.435 [2024-11-28 12:50:07.926507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.435 [2024-11-28 12:50:07.930760] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.435 [2024-11-28 12:50:07.931022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.435 [2024-11-28 12:50:07.931042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.435 [2024-11-28 12:50:07.935338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.435 [2024-11-28 12:50:07.935589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.435 [2024-11-28 12:50:07.935609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.435 [2024-11-28 12:50:07.939684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.435 [2024-11-28 12:50:07.939940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.435 [2024-11-28 12:50:07.939967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.435 [2024-11-28 12:50:07.944095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.435 [2024-11-28 12:50:07.944372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.435 [2024-11-28 12:50:07.944392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:26:25.694 [2024-11-28 12:50:07.948596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.694 [2024-11-28 12:50:07.948851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.694 [2024-11-28 12:50:07.948876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.694 [2024-11-28 12:50:07.952923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.694 [2024-11-28 12:50:07.953210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.694 [2024-11-28 12:50:07.953231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.694 [2024-11-28 12:50:07.957344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.694 [2024-11-28 12:50:07.957602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.694 [2024-11-28 12:50:07.957622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.694 [2024-11-28 12:50:07.961652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.694 [2024-11-28 12:50:07.961896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.694 [2024-11-28 12:50:07.961917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.694 [2024-11-28 12:50:07.966037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.694 [2024-11-28 12:50:07.966290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.694 [2024-11-28 12:50:07.966310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.694 [2024-11-28 12:50:07.970373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.694 [2024-11-28 12:50:07.970641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.694 [2024-11-28 12:50:07.970662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.694 [2024-11-28 12:50:07.974712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.694 [2024-11-28 12:50:07.974973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.694 [2024-11-28 12:50:07.974993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.694 [2024-11-28 12:50:07.979085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.694 [2024-11-28 12:50:07.979351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.694 [2024-11-28 12:50:07.979371] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.694 [2024-11-28 12:50:07.983413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.694 [2024-11-28 12:50:07.983674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.694 [2024-11-28 12:50:07.983694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.694 [2024-11-28 12:50:07.987690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.694 [2024-11-28 12:50:07.987945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.694 [2024-11-28 12:50:07.987972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.694 [2024-11-28 12:50:07.991978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.694 [2024-11-28 12:50:07.992237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.694 [2024-11-28 12:50:07.992257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.694 [2024-11-28 12:50:07.996664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.694 [2024-11-28 12:50:07.996918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:25.694 [2024-11-28 12:50:07.996938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.694 [2024-11-28 12:50:08.001144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.694 [2024-11-28 12:50:08.001394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.694 [2024-11-28 12:50:08.001414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.694 [2024-11-28 12:50:08.006090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.694 [2024-11-28 12:50:08.006345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.694 [2024-11-28 12:50:08.006364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.694 [2024-11-28 12:50:08.011394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.694 [2024-11-28 12:50:08.011662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.694 [2024-11-28 12:50:08.011682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.694 [2024-11-28 12:50:08.017167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.017416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.017436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.021892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.022158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.022178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.026611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.026857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.026877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.031309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.031567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.031587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.035913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.036177] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.036197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.040540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.040793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.040813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.045237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.045492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.045512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.050020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.050282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.050303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.054684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 
00:26:25.695 [2024-11-28 12:50:08.054937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.054964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.059389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.059646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.059666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.064114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.064371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.064391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.068923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.069188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.069211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.073698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.073957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.073978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.078492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.078732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.078752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.082854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.083124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.083143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.087192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.087454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.087474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.091539] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.091791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.091810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.095876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.096131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.096151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.100228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.100485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.100505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.104572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.104836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.104856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:25.695 [2024-11-28 12:50:08.109166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.109417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.109437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.114145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.114401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.114421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.118636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.118885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.118905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.123687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.123933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.123960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.128298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.128549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.128569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.133271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.133552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.133573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.138785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.139066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.695 [2024-11-28 12:50:08.139087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.695 [2024-11-28 12:50:08.143621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.695 [2024-11-28 12:50:08.143874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.696 [2024-11-28 12:50:08.143894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.696 [2024-11-28 12:50:08.148387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.696 [2024-11-28 12:50:08.148634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.696 [2024-11-28 12:50:08.148654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.696 [2024-11-28 12:50:08.153095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.696 [2024-11-28 12:50:08.153350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.696 [2024-11-28 12:50:08.153370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.696 [2024-11-28 12:50:08.157877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.696 [2024-11-28 12:50:08.158134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.696 [2024-11-28 12:50:08.158154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.696 [2024-11-28 12:50:08.162639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.696 [2024-11-28 12:50:08.162897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:25.696 [2024-11-28 12:50:08.162917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.696 [2024-11-28 12:50:08.167381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.696 [2024-11-28 12:50:08.167631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.696 [2024-11-28 12:50:08.167651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.696 [2024-11-28 12:50:08.172098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.696 [2024-11-28 12:50:08.172363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.696 [2024-11-28 12:50:08.172382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.696 [2024-11-28 12:50:08.176784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.696 [2024-11-28 12:50:08.177044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.696 [2024-11-28 12:50:08.177064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.696 [2024-11-28 12:50:08.181478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.696 [2024-11-28 12:50:08.181730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.696 [2024-11-28 12:50:08.181750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.696 [2024-11-28 12:50:08.186275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.696 [2024-11-28 12:50:08.186530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.696 [2024-11-28 12:50:08.186550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.696 [2024-11-28 12:50:08.191309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.696 [2024-11-28 12:50:08.191560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.696 [2024-11-28 12:50:08.191584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.696 [2024-11-28 12:50:08.195941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.696 [2024-11-28 12:50:08.196191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.696 [2024-11-28 12:50:08.196211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.696 [2024-11-28 12:50:08.200515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.696 [2024-11-28 12:50:08.200760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.696 [2024-11-28 12:50:08.200781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.696 [2024-11-28 12:50:08.204964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.696 [2024-11-28 12:50:08.205223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.696 [2024-11-28 12:50:08.205243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.696 [2024-11-28 12:50:08.209750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.955 [2024-11-28 12:50:08.210004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.955 [2024-11-28 12:50:08.210023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.955 [2024-11-28 12:50:08.215205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.955 [2024-11-28 12:50:08.215454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.955 [2024-11-28 12:50:08.215474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.955 [2024-11-28 12:50:08.222297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 
00:26:25.955 [2024-11-28 12:50:08.222604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.955 [2024-11-28 12:50:08.222625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.955 [2024-11-28 12:50:08.229581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.955 [2024-11-28 12:50:08.229930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.955 [2024-11-28 12:50:08.229955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.955 [2024-11-28 12:50:08.236820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.955 [2024-11-28 12:50:08.237121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.955 [2024-11-28 12:50:08.237141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.955 [2024-11-28 12:50:08.244422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.955 [2024-11-28 12:50:08.244744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.955 [2024-11-28 12:50:08.244765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.955 [2024-11-28 12:50:08.252049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.955 [2024-11-28 12:50:08.252398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.955 [2024-11-28 12:50:08.252419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.955 [2024-11-28 12:50:08.259432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.955 [2024-11-28 12:50:08.259748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.955 [2024-11-28 12:50:08.259768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.266354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.266666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.266686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.274015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.274332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.274352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.281665] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.281996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.282016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.288971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.289293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.289313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.296255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.296568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.296590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.303352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.303687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.303708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:26:25.956 [2024-11-28 12:50:08.310159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.310521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.310542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.316813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.317092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.317114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.323614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.323894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.323914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.330521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.330823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.330843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.338040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.338318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.338339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.344828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.345201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.345223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.352223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.352465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.352485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.358282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.358532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.358552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.364695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.364960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.364985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.371208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.371459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.371480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.378035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.378284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.378304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.384241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.384494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:25.956 [2024-11-28 12:50:08.384515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.389032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.389295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.389316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.393621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.393877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.393898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.398132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.398386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.398406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.403014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.403261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.403281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.407800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.408083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.408103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.412422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.412688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.412709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.417043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.417328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.417348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.421702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.421970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.421990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.426489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.956 [2024-11-28 12:50:08.426740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.956 [2024-11-28 12:50:08.426760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.956 [2024-11-28 12:50:08.431259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.957 [2024-11-28 12:50:08.431513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.957 [2024-11-28 12:50:08.431533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.957 [2024-11-28 12:50:08.435971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.957 [2024-11-28 12:50:08.436225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.957 [2024-11-28 12:50:08.436245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.957 [2024-11-28 12:50:08.440668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 
00:26:25.957 [2024-11-28 12:50:08.440920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.957 [2024-11-28 12:50:08.440942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.957 [2024-11-28 12:50:08.445659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.957 [2024-11-28 12:50:08.445921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.957 [2024-11-28 12:50:08.445941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.957 [2024-11-28 12:50:08.450313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.957 [2024-11-28 12:50:08.450574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.957 [2024-11-28 12:50:08.450594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.957 [2024-11-28 12:50:08.455247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.957 [2024-11-28 12:50:08.455495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.957 [2024-11-28 12:50:08.455515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.957 [2024-11-28 12:50:08.459830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.957 [2024-11-28 12:50:08.460083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.957 [2024-11-28 12:50:08.460104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.957 [2024-11-28 12:50:08.464728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.957 [2024-11-28 12:50:08.465004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.957 [2024-11-28 12:50:08.465025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.957 [2024-11-28 12:50:08.469434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:25.957 [2024-11-28 12:50:08.469691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.957 [2024-11-28 12:50:08.469711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.474170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.474419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.474440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.478899] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.479162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.479183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.483519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.483766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.483786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.487972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.488231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.488252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.492430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.492692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.492715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:26.214 [2024-11-28 12:50:08.497408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.497649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.497669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.502878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.503142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.503162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.508306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.508554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.508575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.513054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.513305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.513326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.518410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.518652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.518672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.523542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.523800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.523820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.528674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.528919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.528939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.533460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.533704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.533724] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.539185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.539428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.539448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.544827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.545108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.545129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.550071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.550328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.550349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.555767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.556036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:26.214 [2024-11-28 12:50:08.556057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.561332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.561596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.561616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.566420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.566677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.566697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.571186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.571444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.571465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.575655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.575912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.575932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.580023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.580274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.580294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.584378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.584642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.584663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.588862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.589123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.589142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.593271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.593531] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.593552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.598174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.598437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.598457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.603420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.603669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.603689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.608577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.608824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.608844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.613518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 
00:26:26.214 [2024-11-28 12:50:08.613775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.613795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.619206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.619458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.619478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.214 [2024-11-28 12:50:08.623958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.214 [2024-11-28 12:50:08.624211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.214 [2024-11-28 12:50:08.624235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.215 [2024-11-28 12:50:08.628616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.215 [2024-11-28 12:50:08.628873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.215 [2024-11-28 12:50:08.628895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.215 [2024-11-28 12:50:08.633121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.215 [2024-11-28 12:50:08.633383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.215 [2024-11-28 12:50:08.633404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.215 [2024-11-28 12:50:08.637732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.215 [2024-11-28 12:50:08.638009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.215 [2024-11-28 12:50:08.638029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.215 [2024-11-28 12:50:08.642491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.215 [2024-11-28 12:50:08.642749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.215 [2024-11-28 12:50:08.642769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.215 [2024-11-28 12:50:08.647241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.215 [2024-11-28 12:50:08.647504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.215 [2024-11-28 12:50:08.647525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.215 [2024-11-28 12:50:08.651880] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.215 [2024-11-28 12:50:08.652144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.215 [2024-11-28 12:50:08.652164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.215 [2024-11-28 12:50:08.656686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.215 [2024-11-28 12:50:08.656941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.215 [2024-11-28 12:50:08.656970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.215 [2024-11-28 12:50:08.661329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.215 [2024-11-28 12:50:08.661581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.215 [2024-11-28 12:50:08.661601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.215 [2024-11-28 12:50:08.666118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.215 [2024-11-28 12:50:08.666363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.215 [2024-11-28 12:50:08.666383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:26:26.215 [2024-11-28 12:50:08.670806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.215 [2024-11-28 12:50:08.671067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.215 [2024-11-28 12:50:08.671087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.215 [2024-11-28 12:50:08.675224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.215 [2024-11-28 12:50:08.675473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.215 [2024-11-28 12:50:08.675493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.215 [2024-11-28 12:50:08.679915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.215 [2024-11-28 12:50:08.680175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.215 [2024-11-28 12:50:08.680195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.215 [2024-11-28 12:50:08.685302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.215 [2024-11-28 12:50:08.685542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.215 [2024-11-28 12:50:08.685562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.215 [2024-11-28 12:50:08.690421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.215 [2024-11-28 12:50:08.690686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.215 [2024-11-28 12:50:08.690705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.215 [2024-11-28 12:50:08.696252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.215 [2024-11-28 12:50:08.696503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.215 [2024-11-28 12:50:08.696523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.215 [2024-11-28 12:50:08.701495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.215 [2024-11-28 12:50:08.701738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.215 [2024-11-28 12:50:08.701758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.215 [2024-11-28 12:50:08.707091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.215 [2024-11-28 12:50:08.707342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.215 [2024-11-28 12:50:08.707362] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.215 [2024-11-28 12:50:08.712362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.215 [2024-11-28 12:50:08.712613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.215 [2024-11-28 12:50:08.712635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.215 [2024-11-28 12:50:08.718219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.215 [2024-11-28 12:50:08.718481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.215 [2024-11-28 12:50:08.718501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.215 [2024-11-28 12:50:08.723392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.215 [2024-11-28 12:50:08.723641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.215 [2024-11-28 12:50:08.723661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.215 [2024-11-28 12:50:08.728921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.215 [2024-11-28 12:50:08.729165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:26.215 [2024-11-28 12:50:08.729185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.473 [2024-11-28 12:50:08.734309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.473 [2024-11-28 12:50:08.734546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.473 [2024-11-28 12:50:08.734566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.473 [2024-11-28 12:50:08.740283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.473 [2024-11-28 12:50:08.740537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.473 [2024-11-28 12:50:08.740558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.473 [2024-11-28 12:50:08.746086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.473 [2024-11-28 12:50:08.746339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.473 [2024-11-28 12:50:08.746359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.473 [2024-11-28 12:50:08.753331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.473 [2024-11-28 12:50:08.753673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.473 [2024-11-28 12:50:08.753695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.473 [2024-11-28 12:50:08.759991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.473 [2024-11-28 12:50:08.760234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.473 [2024-11-28 12:50:08.760259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.473 [2024-11-28 12:50:08.765919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.473 [2024-11-28 12:50:08.766169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.473 [2024-11-28 12:50:08.766190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.473 [2024-11-28 12:50:08.771255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.473 [2024-11-28 12:50:08.771503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.473 [2024-11-28 12:50:08.771524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.473 [2024-11-28 12:50:08.777429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.473 [2024-11-28 12:50:08.777731] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.473 [2024-11-28 12:50:08.777751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.473 [2024-11-28 12:50:08.783794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.473 [2024-11-28 12:50:08.784133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.473 [2024-11-28 12:50:08.784153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.473 [2024-11-28 12:50:08.790769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.473 [2024-11-28 12:50:08.791012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.473 [2024-11-28 12:50:08.791033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.473 [2024-11-28 12:50:08.797552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.473 [2024-11-28 12:50:08.797876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.473 [2024-11-28 12:50:08.797896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.473 [2024-11-28 12:50:08.805047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 
00:26:26.473 [2024-11-28 12:50:08.805293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.473 [2024-11-28 12:50:08.805313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.473 [2024-11-28 12:50:08.812454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.474 [2024-11-28 12:50:08.812753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.474 [2024-11-28 12:50:08.812773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.474 [2024-11-28 12:50:08.819184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.474 [2024-11-28 12:50:08.819431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.474 [2024-11-28 12:50:08.819452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.474 [2024-11-28 12:50:08.825272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.474 [2024-11-28 12:50:08.825568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.474 [2024-11-28 12:50:08.825589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.474 [2024-11-28 12:50:08.832044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.474 [2024-11-28 12:50:08.832316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.474 [2024-11-28 12:50:08.832337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.474 [2024-11-28 12:50:08.838013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.474 [2024-11-28 12:50:08.838258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.474 [2024-11-28 12:50:08.838279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.474 [2024-11-28 12:50:08.843895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.474 [2024-11-28 12:50:08.844146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.474 [2024-11-28 12:50:08.844166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.474 [2024-11-28 12:50:08.848639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.474 [2024-11-28 12:50:08.848891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.474 [2024-11-28 12:50:08.848912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.474 [2024-11-28 12:50:08.853024] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.474 [2024-11-28 12:50:08.853276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.474 [2024-11-28 12:50:08.853297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.474 [2024-11-28 12:50:08.857374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.474 [2024-11-28 12:50:08.857629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.474 [2024-11-28 12:50:08.857649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.474 [2024-11-28 12:50:08.861678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.474 [2024-11-28 12:50:08.861924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.474 [2024-11-28 12:50:08.861945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.474 [2024-11-28 12:50:08.866027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.474 [2024-11-28 12:50:08.866280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.474 [2024-11-28 12:50:08.866300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:26.474 6022.00 IOPS, 752.75 MiB/s [2024-11-28T11:50:08.993Z] [2024-11-28 12:50:08.871266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14604c0) with pdu=0x200016eff3c8 00:26:26.474 [2024-11-28 12:50:08.871375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.474 [2024-11-28 12:50:08.871394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.474 00:26:26.474 Latency(us) 00:26:26.474 [2024-11-28T11:50:08.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.474 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:26.474 nvme0n1 : 2.00 6021.33 752.67 0.00 0.00 2652.82 1731.01 8662.15 00:26:26.474 [2024-11-28T11:50:08.993Z] =================================================================================================================== 00:26:26.474 [2024-11-28T11:50:08.993Z] Total : 6021.33 752.67 0.00 0.00 2652.82 1731.01 8662.15 00:26:26.474 { 00:26:26.474 "results": [ 00:26:26.474 { 00:26:26.474 "job": "nvme0n1", 00:26:26.474 "core_mask": "0x2", 00:26:26.474 "workload": "randwrite", 00:26:26.474 "status": "finished", 00:26:26.474 "queue_depth": 16, 00:26:26.474 "io_size": 131072, 00:26:26.474 "runtime": 2.003543, 00:26:26.474 "iops": 6021.333208221636, 00:26:26.474 "mibps": 752.6666510277045, 00:26:26.474 "io_failed": 0, 00:26:26.474 "io_timeout": 0, 00:26:26.474 "avg_latency_us": 2652.821035636028, 00:26:26.474 "min_latency_us": 1731.0052173913043, 00:26:26.474 "max_latency_us": 8662.14956521739 00:26:26.474 } 00:26:26.474 ], 00:26:26.474 "core_count": 1 00:26:26.474 } 00:26:26.474 12:50:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:26.474 12:50:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:26.474 12:50:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:26.474 | .driver_specific 00:26:26.474 | .nvme_error 00:26:26.474 | .status_code 00:26:26.474 | .command_transient_transport_error' 00:26:26.474 12:50:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:26.732 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 390 > 0 )) 00:26:26.732 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2671240 00:26:26.732 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2671240 ']' 00:26:26.732 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2671240 00:26:26.732 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:26.732 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:26.732 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2671240 00:26:26.732 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:26.732 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:26.732 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2671240' 00:26:26.732 killing process with pid 2671240 00:26:26.732 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2671240 00:26:26.732 Received 
shutdown signal, test time was about 2.000000 seconds 00:26:26.732 00:26:26.732 Latency(us) 00:26:26.732 [2024-11-28T11:50:09.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.732 [2024-11-28T11:50:09.251Z] =================================================================================================================== 00:26:26.732 [2024-11-28T11:50:09.251Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:26.732 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2671240 00:26:26.990 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2669344 00:26:26.990 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2669344 ']' 00:26:26.990 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2669344 00:26:26.990 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:26.990 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:26.990 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2669344 00:26:26.990 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:26.990 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:26.990 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2669344' 00:26:26.990 killing process with pid 2669344 00:26:26.990 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2669344 00:26:26.990 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@978 -- # wait 2669344 00:26:27.248 00:26:27.248 real 0m14.080s 00:26:27.248 user 0m26.975s 00:26:27.248 sys 0m4.520s 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:27.248 ************************************ 00:26:27.248 END TEST nvmf_digest_error 00:26:27.248 ************************************ 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:27.248 rmmod nvme_tcp 00:26:27.248 rmmod nvme_fabrics 00:26:27.248 rmmod nvme_keyring 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2669344 ']' 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2669344 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2669344 ']' 00:26:27.248 12:50:09 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2669344 00:26:27.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2669344) - No such process 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2669344 is not found' 00:26:27.248 Process with pid 2669344 is not found 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.248 12:50:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.177 12:50:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:29.434 00:26:29.434 real 0m36.212s 00:26:29.434 user 0m55.269s 00:26:29.434 sys 0m13.536s 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:29.434 12:50:11 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:29.434 ************************************ 00:26:29.434 END TEST nvmf_digest 00:26:29.434 ************************************ 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.434 ************************************ 00:26:29.434 START TEST nvmf_bdevperf 00:26:29.434 ************************************ 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:29.434 * Looking for test storage... 
00:26:29.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:29.434 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:29.435 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:29.435 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:29.435 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:29.435 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:29.435 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:29.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.435 --rc genhtml_branch_coverage=1 00:26:29.435 --rc genhtml_function_coverage=1 00:26:29.435 --rc genhtml_legend=1 00:26:29.435 --rc geninfo_all_blocks=1 00:26:29.435 --rc geninfo_unexecuted_blocks=1 00:26:29.435 00:26:29.435 ' 00:26:29.435 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:26:29.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.435 --rc genhtml_branch_coverage=1 00:26:29.435 --rc genhtml_function_coverage=1 00:26:29.435 --rc genhtml_legend=1 00:26:29.435 --rc geninfo_all_blocks=1 00:26:29.435 --rc geninfo_unexecuted_blocks=1 00:26:29.435 00:26:29.435 ' 00:26:29.435 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:29.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.435 --rc genhtml_branch_coverage=1 00:26:29.435 --rc genhtml_function_coverage=1 00:26:29.435 --rc genhtml_legend=1 00:26:29.435 --rc geninfo_all_blocks=1 00:26:29.435 --rc geninfo_unexecuted_blocks=1 00:26:29.435 00:26:29.435 ' 00:26:29.435 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:29.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.435 --rc genhtml_branch_coverage=1 00:26:29.435 --rc genhtml_function_coverage=1 00:26:29.435 --rc genhtml_legend=1 00:26:29.435 --rc geninfo_all_blocks=1 00:26:29.435 --rc geninfo_unexecuted_blocks=1 00:26:29.435 00:26:29.435 ' 00:26:29.435 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:29.435 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:29.435 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:29.435 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:29.435 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:29.435 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:29.435 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:29.435 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:26:29.435 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:29.435 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:29.435 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:29.435 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:29.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:29.693 12:50:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:35.004 12:50:17 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:35.004 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:35.004 
12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:35.004 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:35.004 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:35.005 Found net devices under 0000:86:00.0: cvl_0_0 00:26:35.005 12:50:17 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:35.005 Found net devices under 0000:86:00.1: cvl_0_1 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:35.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:35.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:26:35.005 00:26:35.005 --- 10.0.0.2 ping statistics --- 00:26:35.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.005 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:35.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:35.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:26:35.005 00:26:35.005 --- 10.0.0.1 ping statistics --- 00:26:35.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.005 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2675257 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2675257 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2675257 ']' 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:35.005 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:35.005 [2024-11-28 12:50:17.388764] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:26:35.005 [2024-11-28 12:50:17.388807] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.005 [2024-11-28 12:50:17.456292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:35.005 [2024-11-28 12:50:17.499844] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:35.005 [2024-11-28 12:50:17.499879] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:35.005 [2024-11-28 12:50:17.499889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:35.005 [2024-11-28 12:50:17.499896] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:35.005 [2024-11-28 12:50:17.499901] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:35.005 [2024-11-28 12:50:17.501244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:35.005 [2024-11-28 12:50:17.501263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:35.005 [2024-11-28 12:50:17.501267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:35.264 [2024-11-28 12:50:17.650324] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:35.264 Malloc0 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.264 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:35.264 [2024-11-28 12:50:17.709640] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.265 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.265 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:35.265 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:35.265 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:35.265 
12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:35.265 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:35.265 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:35.265 { 00:26:35.265 "params": { 00:26:35.265 "name": "Nvme$subsystem", 00:26:35.265 "trtype": "$TEST_TRANSPORT", 00:26:35.265 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.265 "adrfam": "ipv4", 00:26:35.265 "trsvcid": "$NVMF_PORT", 00:26:35.265 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.265 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.265 "hdgst": ${hdgst:-false}, 00:26:35.265 "ddgst": ${ddgst:-false} 00:26:35.265 }, 00:26:35.265 "method": "bdev_nvme_attach_controller" 00:26:35.265 } 00:26:35.265 EOF 00:26:35.265 )") 00:26:35.265 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:35.265 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:35.265 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:35.265 12:50:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:35.265 "params": { 00:26:35.265 "name": "Nvme1", 00:26:35.265 "trtype": "tcp", 00:26:35.265 "traddr": "10.0.0.2", 00:26:35.265 "adrfam": "ipv4", 00:26:35.265 "trsvcid": "4420", 00:26:35.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.265 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:35.265 "hdgst": false, 00:26:35.265 "ddgst": false 00:26:35.265 }, 00:26:35.265 "method": "bdev_nvme_attach_controller" 00:26:35.265 }' 00:26:35.265 [2024-11-28 12:50:17.761208] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:26:35.265 [2024-11-28 12:50:17.761251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2675282 ] 00:26:35.523 [2024-11-28 12:50:17.823473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.523 [2024-11-28 12:50:17.864992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.781 Running I/O for 1 seconds... 00:26:36.715 10704.00 IOPS, 41.81 MiB/s 00:26:36.715 Latency(us) 00:26:36.715 [2024-11-28T11:50:19.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.715 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:36.715 Verification LBA range: start 0x0 length 0x4000 00:26:36.716 Nvme1n1 : 1.01 10753.50 42.01 0.00 0.00 11856.91 1146.88 12366.36 00:26:36.716 [2024-11-28T11:50:19.235Z] =================================================================================================================== 00:26:36.716 [2024-11-28T11:50:19.235Z] Total : 10753.50 42.01 0.00 0.00 11856.91 1146.88 12366.36 00:26:36.974 12:50:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2675519 00:26:36.974 12:50:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:36.974 12:50:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:36.974 12:50:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:36.974 12:50:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:36.974 12:50:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:36.974 12:50:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:26:36.974 12:50:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.974 { 00:26:36.974 "params": { 00:26:36.974 "name": "Nvme$subsystem", 00:26:36.974 "trtype": "$TEST_TRANSPORT", 00:26:36.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.974 "adrfam": "ipv4", 00:26:36.974 "trsvcid": "$NVMF_PORT", 00:26:36.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.974 "hdgst": ${hdgst:-false}, 00:26:36.974 "ddgst": ${ddgst:-false} 00:26:36.974 }, 00:26:36.974 "method": "bdev_nvme_attach_controller" 00:26:36.974 } 00:26:36.974 EOF 00:26:36.974 )") 00:26:36.974 12:50:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:36.974 12:50:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:36.974 12:50:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:36.974 12:50:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:36.974 "params": { 00:26:36.974 "name": "Nvme1", 00:26:36.974 "trtype": "tcp", 00:26:36.974 "traddr": "10.0.0.2", 00:26:36.974 "adrfam": "ipv4", 00:26:36.974 "trsvcid": "4420", 00:26:36.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:36.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:36.974 "hdgst": false, 00:26:36.974 "ddgst": false 00:26:36.974 }, 00:26:36.974 "method": "bdev_nvme_attach_controller" 00:26:36.974 }' 00:26:36.974 [2024-11-28 12:50:19.291156] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:26:36.974 [2024-11-28 12:50:19.291204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2675519 ] 00:26:36.974 [2024-11-28 12:50:19.353786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.974 [2024-11-28 12:50:19.394824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.234 Running I/O for 15 seconds... 00:26:39.102 10588.00 IOPS, 41.36 MiB/s [2024-11-28T11:50:22.558Z] 10621.50 IOPS, 41.49 MiB/s [2024-11-28T11:50:22.558Z] 12:50:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2675257 00:26:40.039 12:50:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:40.039 [2024-11-28 12:50:22.257891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.039 [2024-11-28 12:50:22.257931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.039 [2024-11-28 12:50:22.257954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.039 [2024-11-28 12:50:22.257963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.039 [2024-11-28 12:50:22.257973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.039 [2024-11-28 12:50:22.257981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.039 [2024-11-28 12:50:22.257991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:19 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.039 [2024-11-28 12:50:22.257999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.039 [2024-11-28 12:50:22.258009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.039 [2024-11-28 12:50:22.258016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.039 [2024-11-28 12:50:22.258025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.039 [2024-11-28 12:50:22.258033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.039 [2024-11-28 12:50:22.258041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.039 [2024-11-28 12:50:22.258049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.039 [2024-11-28 12:50:22.258059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.039 [2024-11-28 12:50:22.258066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.039 [2024-11-28 12:50:22.258075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.039 [2024-11-28 12:50:22.258083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:40.039 [2024-11-28 12:50:22.258091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.039 [2024-11-28 12:50:22.258097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.039 [2024-11-28 12:50:22.258106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.039 [2024-11-28 12:50:22.258114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.039 [2024-11-28 12:50:22.258124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.039 [2024-11-28 12:50:22.258134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.039 [2024-11-28 12:50:22.258143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.039 [2024-11-28 12:50:22.258156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.039 [2024-11-28 12:50:22.258166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.039 [2024-11-28 12:50:22.258174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.039 [2024-11-28 12:50:22.258184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.039 [2024-11-28 12:50:22.258193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.039 [2024-11-28 12:50:22.258204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.039 [2024-11-28 12:50:22.258212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.039 [2024-11-28 12:50:22.258221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.039 [2024-11-28 12:50:22.258228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.039 [2024-11-28 12:50:22.258238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.039 [2024-11-28 12:50:22.258245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.039 [2024-11-28 12:50:22.258253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.039 [2024-11-28 12:50:22.258260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 
lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 
12:50:22.258375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258456] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258634] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 
[2024-11-28 12:50:22.258805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.040 [2024-11-28 12:50:22.258856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.040 [2024-11-28 12:50:22.258863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.258871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.258878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.258886] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.258893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.258901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.258907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.258915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.258922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.258931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.258938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.258945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.258956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.258965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.258972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.258980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.258987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.258995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97216 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 
12:50:22.259232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259314] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.041 [2024-11-28 12:50:22.259456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.041 [2024-11-28 12:50:22.259463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.042 [2024-11-28 12:50:22.259478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.042 [2024-11-28 12:50:22.259494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.042 [2024-11-28 12:50:22.259508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.042 [2024-11-28 12:50:22.259524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.042 [2024-11-28 12:50:22.259539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.042 [2024-11-28 12:50:22.259553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.042 [2024-11-28 12:50:22.259568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.042 [2024-11-28 12:50:22.259583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.042 [2024-11-28 12:50:22.259597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.042 [2024-11-28 12:50:22.259612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.042 [2024-11-28 12:50:22.259628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.042 [2024-11-28 12:50:22.259642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.042 
[2024-11-28 12:50:22.259657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.042 [2024-11-28 12:50:22.259672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.042 [2024-11-28 12:50:22.259692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.042 [2024-11-28 12:50:22.259707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.042 [2024-11-28 12:50:22.259721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.042 [2024-11-28 12:50:22.259737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259745] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.042 [2024-11-28 12:50:22.259752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.042 [2024-11-28 12:50:22.259766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.042 [2024-11-28 12:50:22.259782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.042 [2024-11-28 12:50:22.259797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.042 [2024-11-28 12:50:22.259812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.042 [2024-11-28 12:50:22.259827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.042 [2024-11-28 12:50:22.259843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.042 [2024-11-28 12:50:22.259857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.042 [2024-11-28 12:50:22.259872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.042 [2024-11-28 12:50:22.259888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.042 [2024-11-28 12:50:22.259896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a6c0 is same with the state(6) to be set 00:26:40.042 [2024-11-28 12:50:22.259904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:40.042 [2024-11-28 12:50:22.259910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:40.042 [2024-11-28 12:50:22.259916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96592 len:8 PRP1 0x0 
PRP2 0x0
00:26:40.042 [2024-11-28 12:50:22.259923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.042 [2024-11-28 12:50:22.262903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.042 [2024-11-28 12:50:22.262965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.042 [2024-11-28 12:50:22.263581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.042 [2024-11-28 12:50:22.263598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.042 [2024-11-28 12:50:22.263605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.042 [2024-11-28 12:50:22.263785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.042 [2024-11-28 12:50:22.263969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.042 [2024-11-28 12:50:22.263978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.042 [2024-11-28 12:50:22.263986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.042 [2024-11-28 12:50:22.263994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.042 [2024-11-28 12:50:22.276125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.042 [2024-11-28 12:50:22.276586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.042 [2024-11-28 12:50:22.276633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.042 [2024-11-28 12:50:22.276657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.042 [2024-11-28 12:50:22.277092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.042 [2024-11-28 12:50:22.277268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.042 [2024-11-28 12:50:22.277276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.042 [2024-11-28 12:50:22.277283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.042 [2024-11-28 12:50:22.277290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.042 [2024-11-28 12:50:22.288963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.042 [2024-11-28 12:50:22.289393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.043 [2024-11-28 12:50:22.289409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.043 [2024-11-28 12:50:22.289419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.043 [2024-11-28 12:50:22.289585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.043 [2024-11-28 12:50:22.289749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.043 [2024-11-28 12:50:22.289757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.043 [2024-11-28 12:50:22.289763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.043 [2024-11-28 12:50:22.289769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.043 [2024-11-28 12:50:22.301913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.043 [2024-11-28 12:50:22.302378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.043 [2024-11-28 12:50:22.302395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.043 [2024-11-28 12:50:22.302403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.043 [2024-11-28 12:50:22.302576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.043 [2024-11-28 12:50:22.302750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.043 [2024-11-28 12:50:22.302758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.043 [2024-11-28 12:50:22.302764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.043 [2024-11-28 12:50:22.302770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.043 [2024-11-28 12:50:22.314769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.043 [2024-11-28 12:50:22.315227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.043 [2024-11-28 12:50:22.315272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.043 [2024-11-28 12:50:22.315296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.043 [2024-11-28 12:50:22.315880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.043 [2024-11-28 12:50:22.316061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.043 [2024-11-28 12:50:22.316070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.043 [2024-11-28 12:50:22.316076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.043 [2024-11-28 12:50:22.316083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.043 [2024-11-28 12:50:22.328368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.043 [2024-11-28 12:50:22.328733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.043 [2024-11-28 12:50:22.328749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.043 [2024-11-28 12:50:22.328757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.043 [2024-11-28 12:50:22.328930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.043 [2024-11-28 12:50:22.329116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.043 [2024-11-28 12:50:22.329125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.043 [2024-11-28 12:50:22.329131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.043 [2024-11-28 12:50:22.329137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.043 [2024-11-28 12:50:22.341274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.043 [2024-11-28 12:50:22.341670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.043 [2024-11-28 12:50:22.341686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.043 [2024-11-28 12:50:22.341693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.043 [2024-11-28 12:50:22.341867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.043 [2024-11-28 12:50:22.342046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.043 [2024-11-28 12:50:22.342054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.043 [2024-11-28 12:50:22.342061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.043 [2024-11-28 12:50:22.342067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.043 [2024-11-28 12:50:22.354191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.043 [2024-11-28 12:50:22.354630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.043 [2024-11-28 12:50:22.354674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.043 [2024-11-28 12:50:22.354697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.043 [2024-11-28 12:50:22.355138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.043 [2024-11-28 12:50:22.355303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.043 [2024-11-28 12:50:22.355311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.043 [2024-11-28 12:50:22.355317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.043 [2024-11-28 12:50:22.355322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.043 [2024-11-28 12:50:22.367054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.043 [2024-11-28 12:50:22.367486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.043 [2024-11-28 12:50:22.367502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.043 [2024-11-28 12:50:22.367509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.043 [2024-11-28 12:50:22.367682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.043 [2024-11-28 12:50:22.367857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.043 [2024-11-28 12:50:22.367865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.043 [2024-11-28 12:50:22.367874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.043 [2024-11-28 12:50:22.367881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.043 [2024-11-28 12:50:22.379951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.043 [2024-11-28 12:50:22.380377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.043 [2024-11-28 12:50:22.380393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.043 [2024-11-28 12:50:22.380400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.043 [2024-11-28 12:50:22.380563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.043 [2024-11-28 12:50:22.380727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.043 [2024-11-28 12:50:22.380735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.043 [2024-11-28 12:50:22.380741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.043 [2024-11-28 12:50:22.380746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.043 [2024-11-28 12:50:22.392880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.043 [2024-11-28 12:50:22.393365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.043 [2024-11-28 12:50:22.393411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.043 [2024-11-28 12:50:22.393435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.043 [2024-11-28 12:50:22.393945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.043 [2024-11-28 12:50:22.394126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.043 [2024-11-28 12:50:22.394134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.043 [2024-11-28 12:50:22.394140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.044 [2024-11-28 12:50:22.394146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.044 [2024-11-28 12:50:22.405825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.044 [2024-11-28 12:50:22.406275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.044 [2024-11-28 12:50:22.406318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.044 [2024-11-28 12:50:22.406343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.044 [2024-11-28 12:50:22.406928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.044 [2024-11-28 12:50:22.407211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.044 [2024-11-28 12:50:22.407219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.044 [2024-11-28 12:50:22.407225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.044 [2024-11-28 12:50:22.407232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.044 [2024-11-28 12:50:22.418743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.044 [2024-11-28 12:50:22.419071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.044 [2024-11-28 12:50:22.419088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.044 [2024-11-28 12:50:22.419096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.044 [2024-11-28 12:50:22.419270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.044 [2024-11-28 12:50:22.419444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.044 [2024-11-28 12:50:22.419452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.044 [2024-11-28 12:50:22.419459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.044 [2024-11-28 12:50:22.419465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.044 [2024-11-28 12:50:22.431597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.044 [2024-11-28 12:50:22.432031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.044 [2024-11-28 12:50:22.432048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.044 [2024-11-28 12:50:22.432056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.044 [2024-11-28 12:50:22.432228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.044 [2024-11-28 12:50:22.432402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.044 [2024-11-28 12:50:22.432410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.044 [2024-11-28 12:50:22.432417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.044 [2024-11-28 12:50:22.432423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.044 [2024-11-28 12:50:22.444554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.044 [2024-11-28 12:50:22.444921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.044 [2024-11-28 12:50:22.444937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.044 [2024-11-28 12:50:22.444944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.044 [2024-11-28 12:50:22.445138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.044 [2024-11-28 12:50:22.445312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.044 [2024-11-28 12:50:22.445320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.044 [2024-11-28 12:50:22.445326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.044 [2024-11-28 12:50:22.445333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.044 [2024-11-28 12:50:22.457461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.044 [2024-11-28 12:50:22.457897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.044 [2024-11-28 12:50:22.457942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.044 [2024-11-28 12:50:22.458000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.044 [2024-11-28 12:50:22.458433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.044 [2024-11-28 12:50:22.458607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.044 [2024-11-28 12:50:22.458615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.044 [2024-11-28 12:50:22.458621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.044 [2024-11-28 12:50:22.458627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.044 [2024-11-28 12:50:22.470294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.044 [2024-11-28 12:50:22.470720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.044 [2024-11-28 12:50:22.470736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.044 [2024-11-28 12:50:22.470743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.044 [2024-11-28 12:50:22.470907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.044 [2024-11-28 12:50:22.471098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.044 [2024-11-28 12:50:22.471107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.044 [2024-11-28 12:50:22.471113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.044 [2024-11-28 12:50:22.471119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.044 [2024-11-28 12:50:22.483238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.044 [2024-11-28 12:50:22.483709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.044 [2024-11-28 12:50:22.483753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.044 [2024-11-28 12:50:22.483776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.044 [2024-11-28 12:50:22.484280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.044 [2024-11-28 12:50:22.484455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.044 [2024-11-28 12:50:22.484463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.044 [2024-11-28 12:50:22.484469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.044 [2024-11-28 12:50:22.484475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.044 [2024-11-28 12:50:22.496083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.044 [2024-11-28 12:50:22.496538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.044 [2024-11-28 12:50:22.496582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.044 [2024-11-28 12:50:22.496605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.044 [2024-11-28 12:50:22.497204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.044 [2024-11-28 12:50:22.497643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.044 [2024-11-28 12:50:22.497654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.044 [2024-11-28 12:50:22.497661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.044 [2024-11-28 12:50:22.497667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.044 [2024-11-28 12:50:22.509845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.044 [2024-11-28 12:50:22.510306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.044 [2024-11-28 12:50:22.510323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.044 [2024-11-28 12:50:22.510331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.044 [2024-11-28 12:50:22.510509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.044 [2024-11-28 12:50:22.510688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.044 [2024-11-28 12:50:22.510696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.044 [2024-11-28 12:50:22.510702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.044 [2024-11-28 12:50:22.510709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.044 [2024-11-28 12:50:22.522963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.044 [2024-11-28 12:50:22.523381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.044 [2024-11-28 12:50:22.523399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.044 [2024-11-28 12:50:22.523406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.045 [2024-11-28 12:50:22.523584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.045 [2024-11-28 12:50:22.523771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.045 [2024-11-28 12:50:22.523794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.045 [2024-11-28 12:50:22.523801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.045 [2024-11-28 12:50:22.523808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.045 [2024-11-28 12:50:22.536156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.045 [2024-11-28 12:50:22.536609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.045 [2024-11-28 12:50:22.536652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.045 [2024-11-28 12:50:22.536675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.045 [2024-11-28 12:50:22.537273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.045 [2024-11-28 12:50:22.537547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.045 [2024-11-28 12:50:22.537555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.045 [2024-11-28 12:50:22.537562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.045 [2024-11-28 12:50:22.537571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.045 [2024-11-28 12:50:22.549208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.045 [2024-11-28 12:50:22.549572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.045 [2024-11-28 12:50:22.549588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.045 [2024-11-28 12:50:22.549596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.045 [2024-11-28 12:50:22.549774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.045 [2024-11-28 12:50:22.549957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.045 [2024-11-28 12:50:22.549966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.045 [2024-11-28 12:50:22.549973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.045 [2024-11-28 12:50:22.549978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.305 [2024-11-28 12:50:22.562195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.305 [2024-11-28 12:50:22.562651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.305 [2024-11-28 12:50:22.562694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.305 [2024-11-28 12:50:22.562717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.305 [2024-11-28 12:50:22.563319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.305 [2024-11-28 12:50:22.563781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.305 [2024-11-28 12:50:22.563789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.305 [2024-11-28 12:50:22.563796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.305 [2024-11-28 12:50:22.563802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.305 [2024-11-28 12:50:22.575191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.305 [2024-11-28 12:50:22.575618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.305 [2024-11-28 12:50:22.575635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.305 [2024-11-28 12:50:22.575642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.305 [2024-11-28 12:50:22.575816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.305 [2024-11-28 12:50:22.575994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.305 [2024-11-28 12:50:22.576003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.305 [2024-11-28 12:50:22.576009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.305 [2024-11-28 12:50:22.576016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.305 [2024-11-28 12:50:22.588158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.305 [2024-11-28 12:50:22.588624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.305 [2024-11-28 12:50:22.588678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.305 [2024-11-28 12:50:22.588702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.305 [2024-11-28 12:50:22.589302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.305 [2024-11-28 12:50:22.589833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.305 [2024-11-28 12:50:22.589841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.305 [2024-11-28 12:50:22.589848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.305 [2024-11-28 12:50:22.589854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.305 [2024-11-28 12:50:22.601094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.305 [2024-11-28 12:50:22.601469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.305 [2024-11-28 12:50:22.601486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.305 [2024-11-28 12:50:22.601494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.305 [2024-11-28 12:50:22.601668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.305 [2024-11-28 12:50:22.601843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.305 [2024-11-28 12:50:22.601851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.305 [2024-11-28 12:50:22.601858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.305 [2024-11-28 12:50:22.601864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.305 9481.00 IOPS, 37.04 MiB/s [2024-11-28T11:50:22.824Z] [2024-11-28 12:50:22.613995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.305 [2024-11-28 12:50:22.614451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.305 [2024-11-28 12:50:22.614467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.305 [2024-11-28 12:50:22.614474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.305 [2024-11-28 12:50:22.614648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.305 [2024-11-28 12:50:22.614823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.305 [2024-11-28 12:50:22.614832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.305 [2024-11-28 12:50:22.614838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.305 [2024-11-28 12:50:22.614844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.305 [2024-11-28 12:50:22.626907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.305 [2024-11-28 12:50:22.627344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.305 [2024-11-28 12:50:22.627361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.305 [2024-11-28 12:50:22.627371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.305 [2024-11-28 12:50:22.627545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.305 [2024-11-28 12:50:22.627719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.305 [2024-11-28 12:50:22.627727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.305 [2024-11-28 12:50:22.627734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.305 [2024-11-28 12:50:22.627740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.305 [2024-11-28 12:50:22.639820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.305 [2024-11-28 12:50:22.640277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.305 [2024-11-28 12:50:22.640294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.305 [2024-11-28 12:50:22.640302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.305 [2024-11-28 12:50:22.640475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.305 [2024-11-28 12:50:22.640649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.305 [2024-11-28 12:50:22.640657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.305 [2024-11-28 12:50:22.640664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.305 [2024-11-28 12:50:22.640670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.305 [2024-11-28 12:50:22.652795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.305 [2024-11-28 12:50:22.653178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.305 [2024-11-28 12:50:22.653195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.305 [2024-11-28 12:50:22.653203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.305 [2024-11-28 12:50:22.653376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.305 [2024-11-28 12:50:22.653554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.305 [2024-11-28 12:50:22.653562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.305 [2024-11-28 12:50:22.653568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.305 [2024-11-28 12:50:22.653574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.305 [2024-11-28 12:50:22.665733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.305 [2024-11-28 12:50:22.666177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.305 [2024-11-28 12:50:22.666194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.305 [2024-11-28 12:50:22.666201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.305 [2024-11-28 12:50:22.666375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.306 [2024-11-28 12:50:22.666548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.306 [2024-11-28 12:50:22.666560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.306 [2024-11-28 12:50:22.666568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.306 [2024-11-28 12:50:22.666574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.306 [2024-11-28 12:50:22.678684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.306 [2024-11-28 12:50:22.679067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.306 [2024-11-28 12:50:22.679084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.306 [2024-11-28 12:50:22.679092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.306 [2024-11-28 12:50:22.679265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.306 [2024-11-28 12:50:22.679439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.306 [2024-11-28 12:50:22.679447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.306 [2024-11-28 12:50:22.679454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.306 [2024-11-28 12:50:22.679460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.306 [2024-11-28 12:50:22.691622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.306 [2024-11-28 12:50:22.692074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.306 [2024-11-28 12:50:22.692135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.306 [2024-11-28 12:50:22.692160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.306 [2024-11-28 12:50:22.692705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.306 [2024-11-28 12:50:22.692871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.306 [2024-11-28 12:50:22.692879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.306 [2024-11-28 12:50:22.692885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.306 [2024-11-28 12:50:22.692891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.306 [2024-11-28 12:50:22.704782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.306 [2024-11-28 12:50:22.705231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.306 [2024-11-28 12:50:22.705251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.306 [2024-11-28 12:50:22.705259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.306 [2024-11-28 12:50:22.705441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.306 [2024-11-28 12:50:22.705621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.306 [2024-11-28 12:50:22.705630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.306 [2024-11-28 12:50:22.705637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.306 [2024-11-28 12:50:22.705647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.306 [2024-11-28 12:50:22.717813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.306 [2024-11-28 12:50:22.718242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.306 [2024-11-28 12:50:22.718259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.306 [2024-11-28 12:50:22.718267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.306 [2024-11-28 12:50:22.718440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.306 [2024-11-28 12:50:22.718617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.306 [2024-11-28 12:50:22.718626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.306 [2024-11-28 12:50:22.718632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.306 [2024-11-28 12:50:22.718638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.306 [2024-11-28 12:50:22.730692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.306 [2024-11-28 12:50:22.731104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.306 [2024-11-28 12:50:22.731122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.306 [2024-11-28 12:50:22.731130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.306 [2024-11-28 12:50:22.731303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.306 [2024-11-28 12:50:22.731476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.306 [2024-11-28 12:50:22.731484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.306 [2024-11-28 12:50:22.731490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.306 [2024-11-28 12:50:22.731496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.306 [2024-11-28 12:50:22.743617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.306 [2024-11-28 12:50:22.744076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.306 [2024-11-28 12:50:22.744092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.306 [2024-11-28 12:50:22.744100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.306 [2024-11-28 12:50:22.744275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.306 [2024-11-28 12:50:22.744439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.306 [2024-11-28 12:50:22.744447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.306 [2024-11-28 12:50:22.744453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.306 [2024-11-28 12:50:22.744459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.306 [2024-11-28 12:50:22.756564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.306 [2024-11-28 12:50:22.757010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.306 [2024-11-28 12:50:22.757026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.306 [2024-11-28 12:50:22.757033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.306 [2024-11-28 12:50:22.757213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.306 [2024-11-28 12:50:22.757375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.306 [2024-11-28 12:50:22.757383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.306 [2024-11-28 12:50:22.757389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.306 [2024-11-28 12:50:22.757394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.306 [2024-11-28 12:50:22.769525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.306 [2024-11-28 12:50:22.769969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.306 [2024-11-28 12:50:22.769986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.306 [2024-11-28 12:50:22.769993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.306 [2024-11-28 12:50:22.770172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.306 [2024-11-28 12:50:22.770350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.306 [2024-11-28 12:50:22.770358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.306 [2024-11-28 12:50:22.770365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.306 [2024-11-28 12:50:22.770371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.306 [2024-11-28 12:50:22.782700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.306 [2024-11-28 12:50:22.783078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.306 [2024-11-28 12:50:22.783095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.306 [2024-11-28 12:50:22.783102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.306 [2024-11-28 12:50:22.783282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.306 [2024-11-28 12:50:22.783461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.306 [2024-11-28 12:50:22.783469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.306 [2024-11-28 12:50:22.783476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.306 [2024-11-28 12:50:22.783482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.306 [2024-11-28 12:50:22.795797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.306 [2024-11-28 12:50:22.796243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.306 [2024-11-28 12:50:22.796288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.306 [2024-11-28 12:50:22.796311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.306 [2024-11-28 12:50:22.796910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.306 [2024-11-28 12:50:22.797335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.307 [2024-11-28 12:50:22.797343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.307 [2024-11-28 12:50:22.797350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.307 [2024-11-28 12:50:22.797356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.307 [2024-11-28 12:50:22.808762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.307 [2024-11-28 12:50:22.809157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.307 [2024-11-28 12:50:22.809174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.307 [2024-11-28 12:50:22.809181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.307 [2024-11-28 12:50:22.809354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.307 [2024-11-28 12:50:22.809537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.307 [2024-11-28 12:50:22.809546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.307 [2024-11-28 12:50:22.809552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.307 [2024-11-28 12:50:22.809558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.566 [2024-11-28 12:50:22.822015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.566 [2024-11-28 12:50:22.822375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.566 [2024-11-28 12:50:22.822419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.566 [2024-11-28 12:50:22.822442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.566 [2024-11-28 12:50:22.822968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.566 [2024-11-28 12:50:22.823144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.566 [2024-11-28 12:50:22.823152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.566 [2024-11-28 12:50:22.823160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.566 [2024-11-28 12:50:22.823166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.566 [2024-11-28 12:50:22.835040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.566 [2024-11-28 12:50:22.835401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.566 [2024-11-28 12:50:22.835417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.566 [2024-11-28 12:50:22.835424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.567 [2024-11-28 12:50:22.835598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.567 [2024-11-28 12:50:22.835772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.567 [2024-11-28 12:50:22.835783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.567 [2024-11-28 12:50:22.835790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.567 [2024-11-28 12:50:22.835796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.567 [2024-11-28 12:50:22.847961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.567 [2024-11-28 12:50:22.848410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.567 [2024-11-28 12:50:22.848449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.567 [2024-11-28 12:50:22.848475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.567 [2024-11-28 12:50:22.849029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.567 [2024-11-28 12:50:22.849205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.567 [2024-11-28 12:50:22.849213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.567 [2024-11-28 12:50:22.849219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.567 [2024-11-28 12:50:22.849225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.567 [2024-11-28 12:50:22.860924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.567 [2024-11-28 12:50:22.861242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.567 [2024-11-28 12:50:22.861259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.567 [2024-11-28 12:50:22.861267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.567 [2024-11-28 12:50:22.861441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.567 [2024-11-28 12:50:22.861618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.567 [2024-11-28 12:50:22.861627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.567 [2024-11-28 12:50:22.861633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.567 [2024-11-28 12:50:22.861639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.567 [2024-11-28 12:50:22.873805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.567 [2024-11-28 12:50:22.874259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.567 [2024-11-28 12:50:22.874275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.567 [2024-11-28 12:50:22.874282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.567 [2024-11-28 12:50:22.874455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.567 [2024-11-28 12:50:22.874629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.567 [2024-11-28 12:50:22.874637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.567 [2024-11-28 12:50:22.874643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.567 [2024-11-28 12:50:22.874653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.567 [2024-11-28 12:50:22.886635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.567 [2024-11-28 12:50:22.887089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.567 [2024-11-28 12:50:22.887135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.567 [2024-11-28 12:50:22.887158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.567 [2024-11-28 12:50:22.887573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.567 [2024-11-28 12:50:22.887747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.567 [2024-11-28 12:50:22.887755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.567 [2024-11-28 12:50:22.887761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.567 [2024-11-28 12:50:22.887767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.567 [2024-11-28 12:50:22.899459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.567 [2024-11-28 12:50:22.899926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.567 [2024-11-28 12:50:22.899943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.567 [2024-11-28 12:50:22.899957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.567 [2024-11-28 12:50:22.900130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.567 [2024-11-28 12:50:22.900304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.567 [2024-11-28 12:50:22.900312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.567 [2024-11-28 12:50:22.900319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.567 [2024-11-28 12:50:22.900325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.567 [2024-11-28 12:50:22.912328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.567 [2024-11-28 12:50:22.912800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.567 [2024-11-28 12:50:22.912816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.567 [2024-11-28 12:50:22.912823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.567 [2024-11-28 12:50:22.913002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.567 [2024-11-28 12:50:22.913176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.567 [2024-11-28 12:50:22.913184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.567 [2024-11-28 12:50:22.913191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.567 [2024-11-28 12:50:22.913197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.567 [2024-11-28 12:50:22.925207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.567 [2024-11-28 12:50:22.925657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.567 [2024-11-28 12:50:22.925709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.567 [2024-11-28 12:50:22.925732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.567 [2024-11-28 12:50:22.926249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.567 [2024-11-28 12:50:22.926423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.567 [2024-11-28 12:50:22.926431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.567 [2024-11-28 12:50:22.926437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.567 [2024-11-28 12:50:22.926444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.567 [2024-11-28 12:50:22.938164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.567 [2024-11-28 12:50:22.938549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.567 [2024-11-28 12:50:22.938593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.567 [2024-11-28 12:50:22.938616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.567 [2024-11-28 12:50:22.939215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.567 [2024-11-28 12:50:22.939715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.567 [2024-11-28 12:50:22.939723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.567 [2024-11-28 12:50:22.939730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.567 [2024-11-28 12:50:22.939736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.567 [2024-11-28 12:50:22.951137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.567 [2024-11-28 12:50:22.951513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.567 [2024-11-28 12:50:22.951530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.567 [2024-11-28 12:50:22.951537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.567 [2024-11-28 12:50:22.951711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.567 [2024-11-28 12:50:22.951886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.567 [2024-11-28 12:50:22.951894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.567 [2024-11-28 12:50:22.951900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.567 [2024-11-28 12:50:22.951906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.567 [2024-11-28 12:50:22.964073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.567 [2024-11-28 12:50:22.964448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.567 [2024-11-28 12:50:22.964464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.568 [2024-11-28 12:50:22.964471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.568 [2024-11-28 12:50:22.964649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.568 [2024-11-28 12:50:22.964823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.568 [2024-11-28 12:50:22.964831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.568 [2024-11-28 12:50:22.964837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.568 [2024-11-28 12:50:22.964843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.568 [2024-11-28 12:50:22.976996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.568 [2024-11-28 12:50:22.977361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.568 [2024-11-28 12:50:22.977405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:40.568 [2024-11-28 12:50:22.977427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:40.568 [2024-11-28 12:50:22.978023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:40.568 [2024-11-28 12:50:22.978595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.568 [2024-11-28 12:50:22.978604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.568 [2024-11-28 12:50:22.978610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.568 [2024-11-28 12:50:22.978617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:40.568 [2024-11-28 12:50:22.989834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.568 [2024-11-28 12:50:22.990253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.568 [2024-11-28 12:50:22.990270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.568 [2024-11-28 12:50:22.990277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.568 [2024-11-28 12:50:22.990450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.568 [2024-11-28 12:50:22.990624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.568 [2024-11-28 12:50:22.990632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.568 [2024-11-28 12:50:22.990638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.568 [2024-11-28 12:50:22.990644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.568 [2024-11-28 12:50:23.002672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.568 [2024-11-28 12:50:23.003102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.568 [2024-11-28 12:50:23.003119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.568 [2024-11-28 12:50:23.003126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.568 [2024-11-28 12:50:23.003300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.568 [2024-11-28 12:50:23.003473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.568 [2024-11-28 12:50:23.003484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.568 [2024-11-28 12:50:23.003490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.568 [2024-11-28 12:50:23.003497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.568 [2024-11-28 12:50:23.015515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.568 [2024-11-28 12:50:23.015975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.568 [2024-11-28 12:50:23.015991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.568 [2024-11-28 12:50:23.015998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.568 [2024-11-28 12:50:23.016180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.568 [2024-11-28 12:50:23.016345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.568 [2024-11-28 12:50:23.016352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.568 [2024-11-28 12:50:23.016358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.568 [2024-11-28 12:50:23.016364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.568 [2024-11-28 12:50:23.028535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.568 [2024-11-28 12:50:23.029007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.568 [2024-11-28 12:50:23.029026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.568 [2024-11-28 12:50:23.029033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.568 [2024-11-28 12:50:23.029212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.568 [2024-11-28 12:50:23.029423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.568 [2024-11-28 12:50:23.029432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.568 [2024-11-28 12:50:23.029438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.568 [2024-11-28 12:50:23.029445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.568 [2024-11-28 12:50:23.041703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.568 [2024-11-28 12:50:23.042094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.568 [2024-11-28 12:50:23.042112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.568 [2024-11-28 12:50:23.042120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.568 [2024-11-28 12:50:23.042299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.568 [2024-11-28 12:50:23.042479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.568 [2024-11-28 12:50:23.042487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.568 [2024-11-28 12:50:23.042494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.568 [2024-11-28 12:50:23.042500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.568 [2024-11-28 12:50:23.054747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.568 [2024-11-28 12:50:23.055101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.568 [2024-11-28 12:50:23.055117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.568 [2024-11-28 12:50:23.055125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.568 [2024-11-28 12:50:23.055299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.568 [2024-11-28 12:50:23.055476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.568 [2024-11-28 12:50:23.055484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.568 [2024-11-28 12:50:23.055491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.568 [2024-11-28 12:50:23.055497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.568 [2024-11-28 12:50:23.067656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.568 [2024-11-28 12:50:23.068096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.568 [2024-11-28 12:50:23.068141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.568 [2024-11-28 12:50:23.068165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.568 [2024-11-28 12:50:23.068750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.568 [2024-11-28 12:50:23.069118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.568 [2024-11-28 12:50:23.069127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.568 [2024-11-28 12:50:23.069134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.568 [2024-11-28 12:50:23.069140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.568 [2024-11-28 12:50:23.080856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.568 [2024-11-28 12:50:23.081163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.568 [2024-11-28 12:50:23.081180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.568 [2024-11-28 12:50:23.081187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.568 [2024-11-28 12:50:23.081391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.568 [2024-11-28 12:50:23.081572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.568 [2024-11-28 12:50:23.081581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.568 [2024-11-28 12:50:23.081587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.568 [2024-11-28 12:50:23.081594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.828 [2024-11-28 12:50:23.093780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.828 [2024-11-28 12:50:23.094207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.828 [2024-11-28 12:50:23.094227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.828 [2024-11-28 12:50:23.094234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.828 [2024-11-28 12:50:23.094408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.828 [2024-11-28 12:50:23.094585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.828 [2024-11-28 12:50:23.094593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.828 [2024-11-28 12:50:23.094600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.828 [2024-11-28 12:50:23.094606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.828 [2024-11-28 12:50:23.106618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.828 [2024-11-28 12:50:23.107075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.828 [2024-11-28 12:50:23.107091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.828 [2024-11-28 12:50:23.107099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.828 [2024-11-28 12:50:23.107271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.828 [2024-11-28 12:50:23.107444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.828 [2024-11-28 12:50:23.107452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.828 [2024-11-28 12:50:23.107458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.828 [2024-11-28 12:50:23.107464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.828 [2024-11-28 12:50:23.119463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.828 [2024-11-28 12:50:23.119901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.828 [2024-11-28 12:50:23.119945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.828 [2024-11-28 12:50:23.119985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.828 [2024-11-28 12:50:23.120570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.828 [2024-11-28 12:50:23.121025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.828 [2024-11-28 12:50:23.121034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.828 [2024-11-28 12:50:23.121040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.828 [2024-11-28 12:50:23.121047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.828 [2024-11-28 12:50:23.132364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.829 [2024-11-28 12:50:23.132791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.829 [2024-11-28 12:50:23.132808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.829 [2024-11-28 12:50:23.132816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.829 [2024-11-28 12:50:23.133000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.829 [2024-11-28 12:50:23.133175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.829 [2024-11-28 12:50:23.133183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.829 [2024-11-28 12:50:23.133189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.829 [2024-11-28 12:50:23.133195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.829 [2024-11-28 12:50:23.145293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.829 [2024-11-28 12:50:23.145673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.829 [2024-11-28 12:50:23.145689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.829 [2024-11-28 12:50:23.145696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.829 [2024-11-28 12:50:23.145870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.829 [2024-11-28 12:50:23.146053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.829 [2024-11-28 12:50:23.146061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.829 [2024-11-28 12:50:23.146068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.829 [2024-11-28 12:50:23.146074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.829 [2024-11-28 12:50:23.158250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.829 [2024-11-28 12:50:23.158715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.829 [2024-11-28 12:50:23.158759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.829 [2024-11-28 12:50:23.158782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.829 [2024-11-28 12:50:23.159213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.829 [2024-11-28 12:50:23.159378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.829 [2024-11-28 12:50:23.159386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.829 [2024-11-28 12:50:23.159392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.829 [2024-11-28 12:50:23.159398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.829 [2024-11-28 12:50:23.171284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.829 [2024-11-28 12:50:23.171664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.829 [2024-11-28 12:50:23.171681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.829 [2024-11-28 12:50:23.171688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.829 [2024-11-28 12:50:23.171861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.829 [2024-11-28 12:50:23.172041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.829 [2024-11-28 12:50:23.172050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.829 [2024-11-28 12:50:23.172060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.829 [2024-11-28 12:50:23.172067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.829 [2024-11-28 12:50:23.184234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.829 [2024-11-28 12:50:23.184680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.829 [2024-11-28 12:50:23.184696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.829 [2024-11-28 12:50:23.184703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.829 [2024-11-28 12:50:23.184877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.829 [2024-11-28 12:50:23.185058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.829 [2024-11-28 12:50:23.185067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.829 [2024-11-28 12:50:23.185074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.829 [2024-11-28 12:50:23.185080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.829 [2024-11-28 12:50:23.197074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.829 [2024-11-28 12:50:23.197503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.829 [2024-11-28 12:50:23.197519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.829 [2024-11-28 12:50:23.197542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.829 [2024-11-28 12:50:23.197722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.829 [2024-11-28 12:50:23.197902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.829 [2024-11-28 12:50:23.197910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.829 [2024-11-28 12:50:23.197916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.829 [2024-11-28 12:50:23.197923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.829 [2024-11-28 12:50:23.210091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.829 [2024-11-28 12:50:23.210503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.829 [2024-11-28 12:50:23.210548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.829 [2024-11-28 12:50:23.210571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.829 [2024-11-28 12:50:23.211031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.829 [2024-11-28 12:50:23.211213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.829 [2024-11-28 12:50:23.211222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.829 [2024-11-28 12:50:23.211228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.829 [2024-11-28 12:50:23.211234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.829 [2024-11-28 12:50:23.223031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.829 [2024-11-28 12:50:23.223455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.829 [2024-11-28 12:50:23.223472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.829 [2024-11-28 12:50:23.223479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.829 [2024-11-28 12:50:23.223652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.829 [2024-11-28 12:50:23.223826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.829 [2024-11-28 12:50:23.223834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.829 [2024-11-28 12:50:23.223841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.829 [2024-11-28 12:50:23.223847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.829 [2024-11-28 12:50:23.236152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.829 [2024-11-28 12:50:23.236519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.829 [2024-11-28 12:50:23.236536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.829 [2024-11-28 12:50:23.236545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.829 [2024-11-28 12:50:23.236723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.829 [2024-11-28 12:50:23.236904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.829 [2024-11-28 12:50:23.236913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.829 [2024-11-28 12:50:23.236920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.829 [2024-11-28 12:50:23.236926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.829 [2024-11-28 12:50:23.249312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.829 [2024-11-28 12:50:23.249685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.829 [2024-11-28 12:50:23.249703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.829 [2024-11-28 12:50:23.249711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.829 [2024-11-28 12:50:23.249885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.829 [2024-11-28 12:50:23.250067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.829 [2024-11-28 12:50:23.250077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.829 [2024-11-28 12:50:23.250084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.829 [2024-11-28 12:50:23.250090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.829 [2024-11-28 12:50:23.262179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.830 [2024-11-28 12:50:23.262581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.830 [2024-11-28 12:50:23.262601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.830 [2024-11-28 12:50:23.262608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.830 [2024-11-28 12:50:23.262781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.830 [2024-11-28 12:50:23.262963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.830 [2024-11-28 12:50:23.262971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.830 [2024-11-28 12:50:23.262978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.830 [2024-11-28 12:50:23.262984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.830 [2024-11-28 12:50:23.275089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.830 [2024-11-28 12:50:23.275516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.830 [2024-11-28 12:50:23.275533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.830 [2024-11-28 12:50:23.275540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.830 [2024-11-28 12:50:23.275714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.830 [2024-11-28 12:50:23.275891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.830 [2024-11-28 12:50:23.275899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.830 [2024-11-28 12:50:23.275906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.830 [2024-11-28 12:50:23.275912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.830 [2024-11-28 12:50:23.287990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.830 [2024-11-28 12:50:23.288364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.830 [2024-11-28 12:50:23.288381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.830 [2024-11-28 12:50:23.288388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.830 [2024-11-28 12:50:23.288562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.830 [2024-11-28 12:50:23.288736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.830 [2024-11-28 12:50:23.288744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.830 [2024-11-28 12:50:23.288751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.830 [2024-11-28 12:50:23.288757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.830 [2024-11-28 12:50:23.300895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.830 [2024-11-28 12:50:23.301246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.830 [2024-11-28 12:50:23.301262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.830 [2024-11-28 12:50:23.301270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.830 [2024-11-28 12:50:23.301443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.830 [2024-11-28 12:50:23.301621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.830 [2024-11-28 12:50:23.301630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.830 [2024-11-28 12:50:23.301636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.830 [2024-11-28 12:50:23.301642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.830 [2024-11-28 12:50:23.313801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.830 [2024-11-28 12:50:23.314222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.830 [2024-11-28 12:50:23.314238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.830 [2024-11-28 12:50:23.314245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.830 [2024-11-28 12:50:23.314419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.830 [2024-11-28 12:50:23.314594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.830 [2024-11-28 12:50:23.314602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.830 [2024-11-28 12:50:23.314608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.830 [2024-11-28 12:50:23.314615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.830 [2024-11-28 12:50:23.326617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.830 [2024-11-28 12:50:23.326993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.830 [2024-11-28 12:50:23.327010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.830 [2024-11-28 12:50:23.327018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.830 [2024-11-28 12:50:23.327192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.830 [2024-11-28 12:50:23.327367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.830 [2024-11-28 12:50:23.327375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.830 [2024-11-28 12:50:23.327381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.830 [2024-11-28 12:50:23.327387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:40.830 [2024-11-28 12:50:23.339641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.830 [2024-11-28 12:50:23.340086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.830 [2024-11-28 12:50:23.340103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:40.830 [2024-11-28 12:50:23.340110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:40.830 [2024-11-28 12:50:23.340290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:40.830 [2024-11-28 12:50:23.340469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:40.830 [2024-11-28 12:50:23.340477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:40.830 [2024-11-28 12:50:23.340488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:40.830 [2024-11-28 12:50:23.340494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.090 [2024-11-28 12:50:23.352675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.090 [2024-11-28 12:50:23.353105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.090 [2024-11-28 12:50:23.353122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.090 [2024-11-28 12:50:23.353129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.090 [2024-11-28 12:50:23.353303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.090 [2024-11-28 12:50:23.353477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.090 [2024-11-28 12:50:23.353485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.090 [2024-11-28 12:50:23.353491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.090 [2024-11-28 12:50:23.353497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.090 [2024-11-28 12:50:23.365491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.090 [2024-11-28 12:50:23.365840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.090 [2024-11-28 12:50:23.365856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.090 [2024-11-28 12:50:23.365863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.090 [2024-11-28 12:50:23.366044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.090 [2024-11-28 12:50:23.366218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.090 [2024-11-28 12:50:23.366226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.090 [2024-11-28 12:50:23.366232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.090 [2024-11-28 12:50:23.366239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.090 [2024-11-28 12:50:23.378375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.090 [2024-11-28 12:50:23.378812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.090 [2024-11-28 12:50:23.378829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.090 [2024-11-28 12:50:23.378836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.090 [2024-11-28 12:50:23.379017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.090 [2024-11-28 12:50:23.379191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.090 [2024-11-28 12:50:23.379199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.090 [2024-11-28 12:50:23.379206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.090 [2024-11-28 12:50:23.379212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.090 [2024-11-28 12:50:23.391306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.090 [2024-11-28 12:50:23.391734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.090 [2024-11-28 12:50:23.391750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.090 [2024-11-28 12:50:23.391757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.090 [2024-11-28 12:50:23.391931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.090 [2024-11-28 12:50:23.392112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.090 [2024-11-28 12:50:23.392121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.090 [2024-11-28 12:50:23.392127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.090 [2024-11-28 12:50:23.392133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.090 [2024-11-28 12:50:23.404139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.090 [2024-11-28 12:50:23.404547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.090 [2024-11-28 12:50:23.404564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.090 [2024-11-28 12:50:23.404571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.090 [2024-11-28 12:50:23.404745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.090 [2024-11-28 12:50:23.404919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.090 [2024-11-28 12:50:23.404927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.090 [2024-11-28 12:50:23.404933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.090 [2024-11-28 12:50:23.404940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.090 [2024-11-28 12:50:23.417106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.090 [2024-11-28 12:50:23.417531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.090 [2024-11-28 12:50:23.417547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.090 [2024-11-28 12:50:23.417554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.090 [2024-11-28 12:50:23.417728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.090 [2024-11-28 12:50:23.417903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.090 [2024-11-28 12:50:23.417911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.090 [2024-11-28 12:50:23.417918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.090 [2024-11-28 12:50:23.417924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.090 [2024-11-28 12:50:23.430171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.090 [2024-11-28 12:50:23.430598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.090 [2024-11-28 12:50:23.430614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.090 [2024-11-28 12:50:23.430627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.090 [2024-11-28 12:50:23.430801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.090 [2024-11-28 12:50:23.430982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.091 [2024-11-28 12:50:23.430991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.091 [2024-11-28 12:50:23.430998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.091 [2024-11-28 12:50:23.431004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.091 [2024-11-28 12:50:23.443090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.091 [2024-11-28 12:50:23.443507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.091 [2024-11-28 12:50:23.443551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.091 [2024-11-28 12:50:23.443574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.091 [2024-11-28 12:50:23.444174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.091 [2024-11-28 12:50:23.444764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.091 [2024-11-28 12:50:23.444788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.091 [2024-11-28 12:50:23.444809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.091 [2024-11-28 12:50:23.444815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.091 [2024-11-28 12:50:23.455941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.091 [2024-11-28 12:50:23.456386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.091 [2024-11-28 12:50:23.456402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.091 [2024-11-28 12:50:23.456409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.091 [2024-11-28 12:50:23.456583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.091 [2024-11-28 12:50:23.456758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.091 [2024-11-28 12:50:23.456766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.091 [2024-11-28 12:50:23.456772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.091 [2024-11-28 12:50:23.456778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.091 [2024-11-28 12:50:23.468768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.091 [2024-11-28 12:50:23.469176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.091 [2024-11-28 12:50:23.469193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.091 [2024-11-28 12:50:23.469200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.091 [2024-11-28 12:50:23.469372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.091 [2024-11-28 12:50:23.469549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.091 [2024-11-28 12:50:23.469558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.091 [2024-11-28 12:50:23.469564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.091 [2024-11-28 12:50:23.469570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.091 [2024-11-28 12:50:23.481710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.091 [2024-11-28 12:50:23.482141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.091 [2024-11-28 12:50:23.482158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.091 [2024-11-28 12:50:23.482166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.091 [2024-11-28 12:50:23.482339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.091 [2024-11-28 12:50:23.482513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.091 [2024-11-28 12:50:23.482521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.091 [2024-11-28 12:50:23.482527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.091 [2024-11-28 12:50:23.482534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.091 [2024-11-28 12:50:23.494672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.091 [2024-11-28 12:50:23.495097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.091 [2024-11-28 12:50:23.495113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.091 [2024-11-28 12:50:23.495120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.091 [2024-11-28 12:50:23.495294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.091 [2024-11-28 12:50:23.495467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.091 [2024-11-28 12:50:23.495475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.091 [2024-11-28 12:50:23.495482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.091 [2024-11-28 12:50:23.495488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.091 [2024-11-28 12:50:23.507481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.091 [2024-11-28 12:50:23.507890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.091 [2024-11-28 12:50:23.507942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.091 [2024-11-28 12:50:23.507983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.091 [2024-11-28 12:50:23.508569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.091 [2024-11-28 12:50:23.508824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.091 [2024-11-28 12:50:23.508832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.091 [2024-11-28 12:50:23.508858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.091 [2024-11-28 12:50:23.508867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.091 [2024-11-28 12:50:23.520886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.091 [2024-11-28 12:50:23.521313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.091 [2024-11-28 12:50:23.521329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.091 [2024-11-28 12:50:23.521337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.091 [2024-11-28 12:50:23.521510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.091 [2024-11-28 12:50:23.521684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.091 [2024-11-28 12:50:23.521692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.091 [2024-11-28 12:50:23.521699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.091 [2024-11-28 12:50:23.521705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.091 [2024-11-28 12:50:23.533707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.091 [2024-11-28 12:50:23.534119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.091 [2024-11-28 12:50:23.534163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.091 [2024-11-28 12:50:23.534186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.091 [2024-11-28 12:50:23.534770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.091 [2024-11-28 12:50:23.535241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.091 [2024-11-28 12:50:23.535250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.091 [2024-11-28 12:50:23.535256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.091 [2024-11-28 12:50:23.535262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.091 [2024-11-28 12:50:23.546547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.091 [2024-11-28 12:50:23.547009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.091 [2024-11-28 12:50:23.547027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.091 [2024-11-28 12:50:23.547034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.091 [2024-11-28 12:50:23.547214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.091 [2024-11-28 12:50:23.547393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.091 [2024-11-28 12:50:23.547402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.091 [2024-11-28 12:50:23.547408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.091 [2024-11-28 12:50:23.547415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.091 [2024-11-28 12:50:23.559739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.091 [2024-11-28 12:50:23.560131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.091 [2024-11-28 12:50:23.560181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.091 [2024-11-28 12:50:23.560206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.091 [2024-11-28 12:50:23.560794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.091 [2024-11-28 12:50:23.561038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.092 [2024-11-28 12:50:23.561048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.092 [2024-11-28 12:50:23.561054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.092 [2024-11-28 12:50:23.561061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.092 [2024-11-28 12:50:23.572829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.092 [2024-11-28 12:50:23.573250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.092 [2024-11-28 12:50:23.573268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.092 [2024-11-28 12:50:23.573275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.092 [2024-11-28 12:50:23.573449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.092 [2024-11-28 12:50:23.573622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.092 [2024-11-28 12:50:23.573631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.092 [2024-11-28 12:50:23.573637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.092 [2024-11-28 12:50:23.573643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.092 [2024-11-28 12:50:23.585725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.092 [2024-11-28 12:50:23.586154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.092 [2024-11-28 12:50:23.586172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.092 [2024-11-28 12:50:23.586180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.092 [2024-11-28 12:50:23.586354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.092 [2024-11-28 12:50:23.586529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.092 [2024-11-28 12:50:23.586537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.092 [2024-11-28 12:50:23.586543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.092 [2024-11-28 12:50:23.586550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.092 [2024-11-28 12:50:23.598620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.092 [2024-11-28 12:50:23.599075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.092 [2024-11-28 12:50:23.599092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.092 [2024-11-28 12:50:23.599104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.092 [2024-11-28 12:50:23.599289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.092 [2024-11-28 12:50:23.599455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.092 [2024-11-28 12:50:23.599463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.092 [2024-11-28 12:50:23.599469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.092 [2024-11-28 12:50:23.599475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.353 7110.75 IOPS, 27.78 MiB/s [2024-11-28T11:50:23.872Z] [2024-11-28 12:50:23.613103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.353 [2024-11-28 12:50:23.613487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.353 [2024-11-28 12:50:23.613503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.353 [2024-11-28 12:50:23.613511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.353 [2024-11-28 12:50:23.613685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.353 [2024-11-28 12:50:23.613858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.353 [2024-11-28 12:50:23.613867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.353 [2024-11-28 12:50:23.613873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.353 [2024-11-28 12:50:23.613880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.353 [2024-11-28 12:50:23.626215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.353 [2024-11-28 12:50:23.626670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.353 [2024-11-28 12:50:23.626687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.353 [2024-11-28 12:50:23.626695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.353 [2024-11-28 12:50:23.626868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.353 [2024-11-28 12:50:23.627049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.353 [2024-11-28 12:50:23.627058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.353 [2024-11-28 12:50:23.627065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.353 [2024-11-28 12:50:23.627071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.353 [2024-11-28 12:50:23.639308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.353 [2024-11-28 12:50:23.639742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.353 [2024-11-28 12:50:23.639758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.353 [2024-11-28 12:50:23.639766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.353 [2024-11-28 12:50:23.639944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.353 [2024-11-28 12:50:23.640133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.353 [2024-11-28 12:50:23.640142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.353 [2024-11-28 12:50:23.640148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.353 [2024-11-28 12:50:23.640155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.353 [2024-11-28 12:50:23.652325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.353 [2024-11-28 12:50:23.652745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.353 [2024-11-28 12:50:23.652762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.353 [2024-11-28 12:50:23.652769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.353 [2024-11-28 12:50:23.652943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.353 [2024-11-28 12:50:23.653122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.353 [2024-11-28 12:50:23.653131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.353 [2024-11-28 12:50:23.653137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.353 [2024-11-28 12:50:23.653143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.353 [2024-11-28 12:50:23.665429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.353 [2024-11-28 12:50:23.665866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.353 [2024-11-28 12:50:23.665882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.353 [2024-11-28 12:50:23.665890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.353 [2024-11-28 12:50:23.666068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.353 [2024-11-28 12:50:23.666243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.353 [2024-11-28 12:50:23.666251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.353 [2024-11-28 12:50:23.666257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.353 [2024-11-28 12:50:23.666264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.353 [2024-11-28 12:50:23.678547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.353 [2024-11-28 12:50:23.678905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.353 [2024-11-28 12:50:23.678922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.354 [2024-11-28 12:50:23.678930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.354 [2024-11-28 12:50:23.679109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.354 [2024-11-28 12:50:23.679283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.354 [2024-11-28 12:50:23.679291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.354 [2024-11-28 12:50:23.679302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.354 [2024-11-28 12:50:23.679308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.354 [2024-11-28 12:50:23.691579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.354 [2024-11-28 12:50:23.692024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.354 [2024-11-28 12:50:23.692041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.354 [2024-11-28 12:50:23.692049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.354 [2024-11-28 12:50:23.692223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.354 [2024-11-28 12:50:23.692397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.354 [2024-11-28 12:50:23.692406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.354 [2024-11-28 12:50:23.692412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.354 [2024-11-28 12:50:23.692418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.354 [2024-11-28 12:50:23.704586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.354 [2024-11-28 12:50:23.704968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.354 [2024-11-28 12:50:23.704985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.354 [2024-11-28 12:50:23.704992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.354 [2024-11-28 12:50:23.705165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.354 [2024-11-28 12:50:23.705340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.354 [2024-11-28 12:50:23.705348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.354 [2024-11-28 12:50:23.705354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.354 [2024-11-28 12:50:23.705360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.354 [2024-11-28 12:50:23.717672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.354 [2024-11-28 12:50:23.718108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.354 [2024-11-28 12:50:23.718126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.354 [2024-11-28 12:50:23.718133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.354 [2024-11-28 12:50:23.718308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.354 [2024-11-28 12:50:23.718482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.354 [2024-11-28 12:50:23.718490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.354 [2024-11-28 12:50:23.718497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.354 [2024-11-28 12:50:23.718502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.354 [2024-11-28 12:50:23.730713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.354 [2024-11-28 12:50:23.731133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.354 [2024-11-28 12:50:23.731150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.354 [2024-11-28 12:50:23.731157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.354 [2024-11-28 12:50:23.731331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.354 [2024-11-28 12:50:23.731504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.354 [2024-11-28 12:50:23.731512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.354 [2024-11-28 12:50:23.731519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.354 [2024-11-28 12:50:23.731525] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.354 [2024-11-28 12:50:23.743801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.354 [2024-11-28 12:50:23.744171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.354 [2024-11-28 12:50:23.744188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.354 [2024-11-28 12:50:23.744195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.354 [2024-11-28 12:50:23.744369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.354 [2024-11-28 12:50:23.744542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.354 [2024-11-28 12:50:23.744550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.354 [2024-11-28 12:50:23.744557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.354 [2024-11-28 12:50:23.744563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.354 [2024-11-28 12:50:23.756881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.354 [2024-11-28 12:50:23.757299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.354 [2024-11-28 12:50:23.757316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.354 [2024-11-28 12:50:23.757323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.354 [2024-11-28 12:50:23.757497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.354 [2024-11-28 12:50:23.757675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.354 [2024-11-28 12:50:23.757683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.354 [2024-11-28 12:50:23.757689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.354 [2024-11-28 12:50:23.757695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.354 [2024-11-28 12:50:23.769974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.354 [2024-11-28 12:50:23.770409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.354 [2024-11-28 12:50:23.770426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.354 [2024-11-28 12:50:23.770437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.354 [2024-11-28 12:50:23.770617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.354 [2024-11-28 12:50:23.770795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.354 [2024-11-28 12:50:23.770803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.354 [2024-11-28 12:50:23.770810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.354 [2024-11-28 12:50:23.770816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.354 [2024-11-28 12:50:23.783093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.354 [2024-11-28 12:50:23.783506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.354 [2024-11-28 12:50:23.783522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.354 [2024-11-28 12:50:23.783529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.354 [2024-11-28 12:50:23.783702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.354 [2024-11-28 12:50:23.783875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.354 [2024-11-28 12:50:23.783883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.354 [2024-11-28 12:50:23.783890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.354 [2024-11-28 12:50:23.783896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.354 [2024-11-28 12:50:23.796172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.354 [2024-11-28 12:50:23.796594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.354 [2024-11-28 12:50:23.796611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.354 [2024-11-28 12:50:23.796618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.354 [2024-11-28 12:50:23.796791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.354 [2024-11-28 12:50:23.796994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.354 [2024-11-28 12:50:23.797003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.354 [2024-11-28 12:50:23.797010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.354 [2024-11-28 12:50:23.797016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.354 [2024-11-28 12:50:23.809336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.354 [2024-11-28 12:50:23.809740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.354 [2024-11-28 12:50:23.809756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.355 [2024-11-28 12:50:23.809764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.355 [2024-11-28 12:50:23.809937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.355 [2024-11-28 12:50:23.810119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.355 [2024-11-28 12:50:23.810128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.355 [2024-11-28 12:50:23.810134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.355 [2024-11-28 12:50:23.810140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.355 [2024-11-28 12:50:23.822421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.355 [2024-11-28 12:50:23.822765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.355 [2024-11-28 12:50:23.822782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.355 [2024-11-28 12:50:23.822789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.355 [2024-11-28 12:50:23.822969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.355 [2024-11-28 12:50:23.823143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.355 [2024-11-28 12:50:23.823151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.355 [2024-11-28 12:50:23.823158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.355 [2024-11-28 12:50:23.823164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.355 [2024-11-28 12:50:23.835426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.355 [2024-11-28 12:50:23.835839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.355 [2024-11-28 12:50:23.835856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.355 [2024-11-28 12:50:23.835863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.355 [2024-11-28 12:50:23.836042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.355 [2024-11-28 12:50:23.836216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.355 [2024-11-28 12:50:23.836224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.355 [2024-11-28 12:50:23.836231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.355 [2024-11-28 12:50:23.836237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.355 [2024-11-28 12:50:23.848490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.355 [2024-11-28 12:50:23.848907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.355 [2024-11-28 12:50:23.848923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.355 [2024-11-28 12:50:23.848930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.355 [2024-11-28 12:50:23.849110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.355 [2024-11-28 12:50:23.849284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.355 [2024-11-28 12:50:23.849292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.355 [2024-11-28 12:50:23.849302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.355 [2024-11-28 12:50:23.849308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.355 [2024-11-28 12:50:23.861577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.355 [2024-11-28 12:50:23.861987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.355 [2024-11-28 12:50:23.862004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.355 [2024-11-28 12:50:23.862011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.355 [2024-11-28 12:50:23.862184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.355 [2024-11-28 12:50:23.862358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.355 [2024-11-28 12:50:23.862366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.355 [2024-11-28 12:50:23.862372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.355 [2024-11-28 12:50:23.862378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.615 [2024-11-28 12:50:23.874672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.615 [2024-11-28 12:50:23.875089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.615 [2024-11-28 12:50:23.875106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.615 [2024-11-28 12:50:23.875114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.615 [2024-11-28 12:50:23.875292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.615 [2024-11-28 12:50:23.875478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.616 [2024-11-28 12:50:23.875486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.616 [2024-11-28 12:50:23.875493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.616 [2024-11-28 12:50:23.875499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.616 [2024-11-28 12:50:23.887652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.616 [2024-11-28 12:50:23.887986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.616 [2024-11-28 12:50:23.888003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.616 [2024-11-28 12:50:23.888010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.616 [2024-11-28 12:50:23.888183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.616 [2024-11-28 12:50:23.888356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.616 [2024-11-28 12:50:23.888364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.616 [2024-11-28 12:50:23.888370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.616 [2024-11-28 12:50:23.888376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.616 [2024-11-28 12:50:23.900661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.616 [2024-11-28 12:50:23.901080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.616 [2024-11-28 12:50:23.901097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.616 [2024-11-28 12:50:23.901104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.616 [2024-11-28 12:50:23.901278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.616 [2024-11-28 12:50:23.901452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.616 [2024-11-28 12:50:23.901461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.616 [2024-11-28 12:50:23.901467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.616 [2024-11-28 12:50:23.901473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.616 [2024-11-28 12:50:23.913750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.616 [2024-11-28 12:50:23.914165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.616 [2024-11-28 12:50:23.914181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.616 [2024-11-28 12:50:23.914188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.616 [2024-11-28 12:50:23.914360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.616 [2024-11-28 12:50:23.914537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.616 [2024-11-28 12:50:23.914545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.616 [2024-11-28 12:50:23.914551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.616 [2024-11-28 12:50:23.914558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.616 [2024-11-28 12:50:23.926814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.616 [2024-11-28 12:50:23.927205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.616 [2024-11-28 12:50:23.927221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.616 [2024-11-28 12:50:23.927229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.616 [2024-11-28 12:50:23.927402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.616 [2024-11-28 12:50:23.927576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.616 [2024-11-28 12:50:23.927584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.616 [2024-11-28 12:50:23.927590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.616 [2024-11-28 12:50:23.927596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.616 [2024-11-28 12:50:23.939913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.616 [2024-11-28 12:50:23.940329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.616 [2024-11-28 12:50:23.940346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.616 [2024-11-28 12:50:23.940356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.616 [2024-11-28 12:50:23.940531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.616 [2024-11-28 12:50:23.940705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.616 [2024-11-28 12:50:23.940713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.616 [2024-11-28 12:50:23.940719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.616 [2024-11-28 12:50:23.940725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.616 [2024-11-28 12:50:23.952990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.616 [2024-11-28 12:50:23.953384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.616 [2024-11-28 12:50:23.953401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.616 [2024-11-28 12:50:23.953408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.616 [2024-11-28 12:50:23.953582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.616 [2024-11-28 12:50:23.953756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.616 [2024-11-28 12:50:23.953764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.616 [2024-11-28 12:50:23.953770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.616 [2024-11-28 12:50:23.953776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.616 [2024-11-28 12:50:23.966025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.616 [2024-11-28 12:50:23.966436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.616 [2024-11-28 12:50:23.966453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.616 [2024-11-28 12:50:23.966460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.616 [2024-11-28 12:50:23.966633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.616 [2024-11-28 12:50:23.966806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.616 [2024-11-28 12:50:23.966814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.616 [2024-11-28 12:50:23.966821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.616 [2024-11-28 12:50:23.966827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.616 [2024-11-28 12:50:23.979152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.616 [2024-11-28 12:50:23.979563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.616 [2024-11-28 12:50:23.979579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.616 [2024-11-28 12:50:23.979587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.616 [2024-11-28 12:50:23.979761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.616 [2024-11-28 12:50:23.979935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.616 [2024-11-28 12:50:23.979953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.616 [2024-11-28 12:50:23.979960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.616 [2024-11-28 12:50:23.979967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.616 [2024-11-28 12:50:23.992226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.616 [2024-11-28 12:50:23.992573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.616 [2024-11-28 12:50:23.992589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.616 [2024-11-28 12:50:23.992596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.616 [2024-11-28 12:50:23.992770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.616 [2024-11-28 12:50:23.992954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.616 [2024-11-28 12:50:23.992962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.616 [2024-11-28 12:50:23.992969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.616 [2024-11-28 12:50:23.992975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.616 [2024-11-28 12:50:24.005246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.616 [2024-11-28 12:50:24.005627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.616 [2024-11-28 12:50:24.005644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.616 [2024-11-28 12:50:24.005651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.617 [2024-11-28 12:50:24.005824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.617 [2024-11-28 12:50:24.006004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.617 [2024-11-28 12:50:24.006013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.617 [2024-11-28 12:50:24.006019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.617 [2024-11-28 12:50:24.006025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.617 [2024-11-28 12:50:24.018298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.617 [2024-11-28 12:50:24.018707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.617 [2024-11-28 12:50:24.018724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.617 [2024-11-28 12:50:24.018731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.617 [2024-11-28 12:50:24.018905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.617 [2024-11-28 12:50:24.019084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.617 [2024-11-28 12:50:24.019093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.617 [2024-11-28 12:50:24.019100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.617 [2024-11-28 12:50:24.019109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.617 [2024-11-28 12:50:24.031377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.617 [2024-11-28 12:50:24.031764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.617 [2024-11-28 12:50:24.031780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.617 [2024-11-28 12:50:24.031787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.617 [2024-11-28 12:50:24.031966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.617 [2024-11-28 12:50:24.032140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.617 [2024-11-28 12:50:24.032148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.617 [2024-11-28 12:50:24.032154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.617 [2024-11-28 12:50:24.032160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.617 [2024-11-28 12:50:24.044426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.617 [2024-11-28 12:50:24.044858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.617 [2024-11-28 12:50:24.044874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.617 [2024-11-28 12:50:24.044881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.617 [2024-11-28 12:50:24.045061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.617 [2024-11-28 12:50:24.045236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.617 [2024-11-28 12:50:24.045244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.617 [2024-11-28 12:50:24.045250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.617 [2024-11-28 12:50:24.045256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.617 [2024-11-28 12:50:24.057545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.617 [2024-11-28 12:50:24.057881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.617 [2024-11-28 12:50:24.057897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.617 [2024-11-28 12:50:24.057904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.617 [2024-11-28 12:50:24.058083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.617 [2024-11-28 12:50:24.058258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.617 [2024-11-28 12:50:24.058265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.617 [2024-11-28 12:50:24.058272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.617 [2024-11-28 12:50:24.058277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.617 [2024-11-28 12:50:24.070824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.617 [2024-11-28 12:50:24.071247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.617 [2024-11-28 12:50:24.071264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.617 [2024-11-28 12:50:24.071271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.617 [2024-11-28 12:50:24.071450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.617 [2024-11-28 12:50:24.071629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.617 [2024-11-28 12:50:24.071637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.617 [2024-11-28 12:50:24.071643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.617 [2024-11-28 12:50:24.071649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.617 [2024-11-28 12:50:24.083961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.617 [2024-11-28 12:50:24.084396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.617 [2024-11-28 12:50:24.084413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.617 [2024-11-28 12:50:24.084420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.617 [2024-11-28 12:50:24.084594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.617 [2024-11-28 12:50:24.084769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.617 [2024-11-28 12:50:24.084778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.617 [2024-11-28 12:50:24.084784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.617 [2024-11-28 12:50:24.084790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.617 [2024-11-28 12:50:24.097083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.617 [2024-11-28 12:50:24.097435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.617 [2024-11-28 12:50:24.097452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.617 [2024-11-28 12:50:24.097459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.617 [2024-11-28 12:50:24.097632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.617 [2024-11-28 12:50:24.097810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.617 [2024-11-28 12:50:24.097818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.617 [2024-11-28 12:50:24.097824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.617 [2024-11-28 12:50:24.097830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.617 [2024-11-28 12:50:24.110097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.617 [2024-11-28 12:50:24.110520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.617 [2024-11-28 12:50:24.110537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.617 [2024-11-28 12:50:24.110544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.617 [2024-11-28 12:50:24.110722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.617 [2024-11-28 12:50:24.110897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.617 [2024-11-28 12:50:24.110905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.617 [2024-11-28 12:50:24.110912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.617 [2024-11-28 12:50:24.110919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.617 [2024-11-28 12:50:24.123200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.617 [2024-11-28 12:50:24.123555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.617 [2024-11-28 12:50:24.123572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.617 [2024-11-28 12:50:24.123579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.617 [2024-11-28 12:50:24.123752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.617 [2024-11-28 12:50:24.123925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.617 [2024-11-28 12:50:24.123933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.617 [2024-11-28 12:50:24.123940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.617 [2024-11-28 12:50:24.123946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.878 [2024-11-28 12:50:24.136345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.878 [2024-11-28 12:50:24.136718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.878 [2024-11-28 12:50:24.136735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.878 [2024-11-28 12:50:24.136742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.878 [2024-11-28 12:50:24.136921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.878 [2024-11-28 12:50:24.137106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.878 [2024-11-28 12:50:24.137115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.878 [2024-11-28 12:50:24.137121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.878 [2024-11-28 12:50:24.137127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.878 [2024-11-28 12:50:24.149364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.878 [2024-11-28 12:50:24.149824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.878 [2024-11-28 12:50:24.149841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.878 [2024-11-28 12:50:24.149848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.878 [2024-11-28 12:50:24.150026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.878 [2024-11-28 12:50:24.150201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.878 [2024-11-28 12:50:24.150212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.878 [2024-11-28 12:50:24.150219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.878 [2024-11-28 12:50:24.150225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.878 [2024-11-28 12:50:24.162295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.878 [2024-11-28 12:50:24.162713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.878 [2024-11-28 12:50:24.162730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.878 [2024-11-28 12:50:24.162738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.878 [2024-11-28 12:50:24.162911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.878 [2024-11-28 12:50:24.163091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.878 [2024-11-28 12:50:24.163100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.878 [2024-11-28 12:50:24.163107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.878 [2024-11-28 12:50:24.163113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.878 [2024-11-28 12:50:24.175563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.878 [2024-11-28 12:50:24.175913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.878 [2024-11-28 12:50:24.175931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.878 [2024-11-28 12:50:24.175940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.878 [2024-11-28 12:50:24.176125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.878 [2024-11-28 12:50:24.176311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.878 [2024-11-28 12:50:24.176320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.878 [2024-11-28 12:50:24.176328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.878 [2024-11-28 12:50:24.176335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.878 [2024-11-28 12:50:24.188668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.878 [2024-11-28 12:50:24.188972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.878 [2024-11-28 12:50:24.188990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.878 [2024-11-28 12:50:24.188997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.878 [2024-11-28 12:50:24.189177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.878 [2024-11-28 12:50:24.189356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.878 [2024-11-28 12:50:24.189364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.878 [2024-11-28 12:50:24.189371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.878 [2024-11-28 12:50:24.189381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.878 [2024-11-28 12:50:24.201861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.878 [2024-11-28 12:50:24.202313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.878 [2024-11-28 12:50:24.202332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.878 [2024-11-28 12:50:24.202340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.878 [2024-11-28 12:50:24.202524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.878 [2024-11-28 12:50:24.202710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.878 [2024-11-28 12:50:24.202718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.878 [2024-11-28 12:50:24.202725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.878 [2024-11-28 12:50:24.202732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.878 [2024-11-28 12:50:24.215001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.878 [2024-11-28 12:50:24.215457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.878 [2024-11-28 12:50:24.215475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.878 [2024-11-28 12:50:24.215482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.878 [2024-11-28 12:50:24.215667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.878 [2024-11-28 12:50:24.215852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.878 [2024-11-28 12:50:24.215861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.878 [2024-11-28 12:50:24.215868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.878 [2024-11-28 12:50:24.215875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.878 [2024-11-28 12:50:24.228238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.878 [2024-11-28 12:50:24.228717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.878 [2024-11-28 12:50:24.228735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.878 [2024-11-28 12:50:24.228743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.878 [2024-11-28 12:50:24.228938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.878 [2024-11-28 12:50:24.229144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.878 [2024-11-28 12:50:24.229153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.878 [2024-11-28 12:50:24.229161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.878 [2024-11-28 12:50:24.229167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.878 [2024-11-28 12:50:24.241715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.878 [2024-11-28 12:50:24.242164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.878 [2024-11-28 12:50:24.242182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.878 [2024-11-28 12:50:24.242190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.878 [2024-11-28 12:50:24.242386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.878 [2024-11-28 12:50:24.242584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.878 [2024-11-28 12:50:24.242593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.879 [2024-11-28 12:50:24.242600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.879 [2024-11-28 12:50:24.242607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.879 [2024-11-28 12:50:24.255244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.879 [2024-11-28 12:50:24.255690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.879 [2024-11-28 12:50:24.255709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.879 [2024-11-28 12:50:24.255718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.879 [2024-11-28 12:50:24.255915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.879 [2024-11-28 12:50:24.256122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.879 [2024-11-28 12:50:24.256132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.879 [2024-11-28 12:50:24.256139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.879 [2024-11-28 12:50:24.256146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.879 [2024-11-28 12:50:24.268421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.879 [2024-11-28 12:50:24.268834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.879 [2024-11-28 12:50:24.268850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.879 [2024-11-28 12:50:24.268857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.879 [2024-11-28 12:50:24.269043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.879 [2024-11-28 12:50:24.269222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.879 [2024-11-28 12:50:24.269230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.879 [2024-11-28 12:50:24.269237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.879 [2024-11-28 12:50:24.269243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.879 [2024-11-28 12:50:24.281568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.879 [2024-11-28 12:50:24.282028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.879 [2024-11-28 12:50:24.282073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:41.879 [2024-11-28 12:50:24.282097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:41.879 [2024-11-28 12:50:24.282652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:41.879 [2024-11-28 12:50:24.282832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.879 [2024-11-28 12:50:24.282841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.879 [2024-11-28 12:50:24.282848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.879 [2024-11-28 12:50:24.282855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.879 [2024-11-28 12:50:24.294538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.879 [2024-11-28 12:50:24.294852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.879 [2024-11-28 12:50:24.294869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.879 [2024-11-28 12:50:24.294876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.879 [2024-11-28 12:50:24.295056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.879 [2024-11-28 12:50:24.295230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.879 [2024-11-28 12:50:24.295238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.879 [2024-11-28 12:50:24.295245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.879 [2024-11-28 12:50:24.295251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.879 [2024-11-28 12:50:24.307402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.879 [2024-11-28 12:50:24.307744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.879 [2024-11-28 12:50:24.307762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.879 [2024-11-28 12:50:24.307769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.879 [2024-11-28 12:50:24.307942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.879 [2024-11-28 12:50:24.308123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.879 [2024-11-28 12:50:24.308130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.879 [2024-11-28 12:50:24.308137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.879 [2024-11-28 12:50:24.308142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.879 [2024-11-28 12:50:24.320551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.879 [2024-11-28 12:50:24.320920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.879 [2024-11-28 12:50:24.320985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.879 [2024-11-28 12:50:24.321010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.879 [2024-11-28 12:50:24.321595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.879 [2024-11-28 12:50:24.321861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.879 [2024-11-28 12:50:24.321871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.879 [2024-11-28 12:50:24.321879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.879 [2024-11-28 12:50:24.321885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.879 [2024-11-28 12:50:24.333694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.879 [2024-11-28 12:50:24.333971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.879 [2024-11-28 12:50:24.333988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.879 [2024-11-28 12:50:24.333995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.879 [2024-11-28 12:50:24.334169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.879 [2024-11-28 12:50:24.334343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.879 [2024-11-28 12:50:24.334351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.879 [2024-11-28 12:50:24.334357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.879 [2024-11-28 12:50:24.334363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.879 [2024-11-28 12:50:24.346609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.879 [2024-11-28 12:50:24.346915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.879 [2024-11-28 12:50:24.346931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.879 [2024-11-28 12:50:24.346939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.879 [2024-11-28 12:50:24.347120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.879 [2024-11-28 12:50:24.347294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.879 [2024-11-28 12:50:24.347302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.879 [2024-11-28 12:50:24.347309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.879 [2024-11-28 12:50:24.347315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.879 [2024-11-28 12:50:24.359482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.879 [2024-11-28 12:50:24.359841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.879 [2024-11-28 12:50:24.359857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.879 [2024-11-28 12:50:24.359865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.879 [2024-11-28 12:50:24.360046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.879 [2024-11-28 12:50:24.360220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.879 [2024-11-28 12:50:24.360229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.879 [2024-11-28 12:50:24.360235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.879 [2024-11-28 12:50:24.360246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.879 [2024-11-28 12:50:24.372444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.879 [2024-11-28 12:50:24.372821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.879 [2024-11-28 12:50:24.372838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.879 [2024-11-28 12:50:24.372845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.879 [2024-11-28 12:50:24.373025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.879 [2024-11-28 12:50:24.373200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.879 [2024-11-28 12:50:24.373208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.880 [2024-11-28 12:50:24.373214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.880 [2024-11-28 12:50:24.373220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:41.880 [2024-11-28 12:50:24.385396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:41.880 [2024-11-28 12:50:24.385777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.880 [2024-11-28 12:50:24.385793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:41.880 [2024-11-28 12:50:24.385800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:41.880 [2024-11-28 12:50:24.385980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:41.880 [2024-11-28 12:50:24.386155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:41.880 [2024-11-28 12:50:24.386163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:41.880 [2024-11-28 12:50:24.386169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:41.880 [2024-11-28 12:50:24.386175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.140 [2024-11-28 12:50:24.398491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.140 [2024-11-28 12:50:24.398868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.140 [2024-11-28 12:50:24.398886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.140 [2024-11-28 12:50:24.398894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.140 [2024-11-28 12:50:24.399080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.140 [2024-11-28 12:50:24.399259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.140 [2024-11-28 12:50:24.399268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.140 [2024-11-28 12:50:24.399274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.140 [2024-11-28 12:50:24.399280] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.140 [2024-11-28 12:50:24.411326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.140 [2024-11-28 12:50:24.411681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.140 [2024-11-28 12:50:24.411702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.140 [2024-11-28 12:50:24.411709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.140 [2024-11-28 12:50:24.411883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.140 [2024-11-28 12:50:24.412063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.140 [2024-11-28 12:50:24.412073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.140 [2024-11-28 12:50:24.412080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.140 [2024-11-28 12:50:24.412086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.140 [2024-11-28 12:50:24.424266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.140 [2024-11-28 12:50:24.424571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.140 [2024-11-28 12:50:24.424587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.140 [2024-11-28 12:50:24.424595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.140 [2024-11-28 12:50:24.424769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.141 [2024-11-28 12:50:24.424943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.141 [2024-11-28 12:50:24.424957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.141 [2024-11-28 12:50:24.424964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.141 [2024-11-28 12:50:24.424970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.141 [2024-11-28 12:50:24.437172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.141 [2024-11-28 12:50:24.437521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.141 [2024-11-28 12:50:24.437537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.141 [2024-11-28 12:50:24.437544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.141 [2024-11-28 12:50:24.437718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.141 [2024-11-28 12:50:24.437893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.141 [2024-11-28 12:50:24.437900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.141 [2024-11-28 12:50:24.437907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.141 [2024-11-28 12:50:24.437913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.141 [2024-11-28 12:50:24.450089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.141 [2024-11-28 12:50:24.450454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.141 [2024-11-28 12:50:24.450470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.141 [2024-11-28 12:50:24.450478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.141 [2024-11-28 12:50:24.450654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.141 [2024-11-28 12:50:24.450828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.141 [2024-11-28 12:50:24.450836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.141 [2024-11-28 12:50:24.450842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.141 [2024-11-28 12:50:24.450848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.141 [2024-11-28 12:50:24.463039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.141 [2024-11-28 12:50:24.463393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.141 [2024-11-28 12:50:24.463410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.141 [2024-11-28 12:50:24.463417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.141 [2024-11-28 12:50:24.463590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.141 [2024-11-28 12:50:24.463764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.141 [2024-11-28 12:50:24.463772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.141 [2024-11-28 12:50:24.463779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.141 [2024-11-28 12:50:24.463785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.141 [2024-11-28 12:50:24.475964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.141 [2024-11-28 12:50:24.476320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.141 [2024-11-28 12:50:24.476336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.141 [2024-11-28 12:50:24.476343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.141 [2024-11-28 12:50:24.476517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.141 [2024-11-28 12:50:24.476690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.141 [2024-11-28 12:50:24.476699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.141 [2024-11-28 12:50:24.476705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.141 [2024-11-28 12:50:24.476711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.141 [2024-11-28 12:50:24.488883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.141 [2024-11-28 12:50:24.489317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.141 [2024-11-28 12:50:24.489334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.141 [2024-11-28 12:50:24.489341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.141 [2024-11-28 12:50:24.489514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.141 [2024-11-28 12:50:24.489688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.141 [2024-11-28 12:50:24.489699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.141 [2024-11-28 12:50:24.489705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.141 [2024-11-28 12:50:24.489712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.141 [2024-11-28 12:50:24.501733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.141 [2024-11-28 12:50:24.502042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.141 [2024-11-28 12:50:24.502059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.141 [2024-11-28 12:50:24.502067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.141 [2024-11-28 12:50:24.502241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.141 [2024-11-28 12:50:24.502413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.141 [2024-11-28 12:50:24.502421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.141 [2024-11-28 12:50:24.502428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.141 [2024-11-28 12:50:24.502434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.141 [2024-11-28 12:50:24.514681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.141 [2024-11-28 12:50:24.515131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.141 [2024-11-28 12:50:24.515149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.141 [2024-11-28 12:50:24.515156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.141 [2024-11-28 12:50:24.515329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.141 [2024-11-28 12:50:24.515509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.141 [2024-11-28 12:50:24.515518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.141 [2024-11-28 12:50:24.515524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.141 [2024-11-28 12:50:24.515530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.141 [2024-11-28 12:50:24.527521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.141 [2024-11-28 12:50:24.527962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.141 [2024-11-28 12:50:24.528006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.141 [2024-11-28 12:50:24.528029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.141 [2024-11-28 12:50:24.528444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.141 [2024-11-28 12:50:24.528618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.141 [2024-11-28 12:50:24.528626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.141 [2024-11-28 12:50:24.528632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.141 [2024-11-28 12:50:24.528638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.141 [2024-11-28 12:50:24.540517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.141 [2024-11-28 12:50:24.540986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.141 [2024-11-28 12:50:24.541030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.141 [2024-11-28 12:50:24.541052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.141 [2024-11-28 12:50:24.541561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.141 [2024-11-28 12:50:24.541735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.141 [2024-11-28 12:50:24.541743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.141 [2024-11-28 12:50:24.541749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.141 [2024-11-28 12:50:24.541755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.141 [2024-11-28 12:50:24.553447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.141 [2024-11-28 12:50:24.553881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.141 [2024-11-28 12:50:24.553937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.141 [2024-11-28 12:50:24.553975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.141 [2024-11-28 12:50:24.554480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.142 [2024-11-28 12:50:24.554654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.142 [2024-11-28 12:50:24.554662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.142 [2024-11-28 12:50:24.554668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.142 [2024-11-28 12:50:24.554675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.142 [2024-11-28 12:50:24.566317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.142 [2024-11-28 12:50:24.566770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.142 [2024-11-28 12:50:24.566787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.142 [2024-11-28 12:50:24.566794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.142 [2024-11-28 12:50:24.566979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.142 [2024-11-28 12:50:24.567159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.142 [2024-11-28 12:50:24.567168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.142 [2024-11-28 12:50:24.567174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.142 [2024-11-28 12:50:24.567181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.142 [2024-11-28 12:50:24.579539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.142 [2024-11-28 12:50:24.579985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.142 [2024-11-28 12:50:24.580006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.142 [2024-11-28 12:50:24.580014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.142 [2024-11-28 12:50:24.580193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.142 [2024-11-28 12:50:24.580379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.142 [2024-11-28 12:50:24.580387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.142 [2024-11-28 12:50:24.580394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.142 [2024-11-28 12:50:24.580400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.142 [2024-11-28 12:50:24.592471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.142 [2024-11-28 12:50:24.592924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.142 [2024-11-28 12:50:24.592941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.142 [2024-11-28 12:50:24.592954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.142 [2024-11-28 12:50:24.593128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.142 [2024-11-28 12:50:24.593302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.142 [2024-11-28 12:50:24.593310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.142 [2024-11-28 12:50:24.593316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.142 [2024-11-28 12:50:24.593323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.142 [2024-11-28 12:50:24.605347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.142 [2024-11-28 12:50:24.605753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.142 [2024-11-28 12:50:24.605769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.142 [2024-11-28 12:50:24.605776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.142 [2024-11-28 12:50:24.605939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.142 [2024-11-28 12:50:24.606132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.142 [2024-11-28 12:50:24.606141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.142 [2024-11-28 12:50:24.606148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.142 [2024-11-28 12:50:24.606154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.142 5688.60 IOPS, 22.22 MiB/s [2024-11-28T11:50:24.661Z]
00:26:42.142 [2024-11-28 12:50:24.618256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.142 [2024-11-28 12:50:24.618656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.142 [2024-11-28 12:50:24.618673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.142 [2024-11-28 12:50:24.618680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.142 [2024-11-28 12:50:24.618847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.142 [2024-11-28 12:50:24.619017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.142 [2024-11-28 12:50:24.619026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.142 [2024-11-28 12:50:24.619032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.142 [2024-11-28 12:50:24.619038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.142 [2024-11-28 12:50:24.631082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.142 [2024-11-28 12:50:24.631488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.142 [2024-11-28 12:50:24.631504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.142 [2024-11-28 12:50:24.631511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.142 [2024-11-28 12:50:24.631675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.142 [2024-11-28 12:50:24.631840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.142 [2024-11-28 12:50:24.631847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.142 [2024-11-28 12:50:24.631853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.142 [2024-11-28 12:50:24.631859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.142 [2024-11-28 12:50:24.643974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.142 [2024-11-28 12:50:24.644407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.142 [2024-11-28 12:50:24.644451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.142 [2024-11-28 12:50:24.644474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.142 [2024-11-28 12:50:24.644894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.142 [2024-11-28 12:50:24.645084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.142 [2024-11-28 12:50:24.645092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.142 [2024-11-28 12:50:24.645099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.142 [2024-11-28 12:50:24.645105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.403 [2024-11-28 12:50:24.657200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.403 [2024-11-28 12:50:24.657641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.403 [2024-11-28 12:50:24.657658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.403 [2024-11-28 12:50:24.657666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.403 [2024-11-28 12:50:24.657844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.403 [2024-11-28 12:50:24.658030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.403 [2024-11-28 12:50:24.658039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.403 [2024-11-28 12:50:24.658049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.403 [2024-11-28 12:50:24.658056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.403 [2024-11-28 12:50:24.670147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.403 [2024-11-28 12:50:24.670606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.403 [2024-11-28 12:50:24.670650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.403 [2024-11-28 12:50:24.670673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.403 [2024-11-28 12:50:24.671128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.403 [2024-11-28 12:50:24.671304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.403 [2024-11-28 12:50:24.671312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.403 [2024-11-28 12:50:24.671319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.403 [2024-11-28 12:50:24.671324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.403 [2024-11-28 12:50:24.682999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.403 [2024-11-28 12:50:24.683407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.403 [2024-11-28 12:50:24.683423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.403 [2024-11-28 12:50:24.683430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.403 [2024-11-28 12:50:24.683594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.403 [2024-11-28 12:50:24.683757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.403 [2024-11-28 12:50:24.683764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.403 [2024-11-28 12:50:24.683770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.403 [2024-11-28 12:50:24.683776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.403 [2024-11-28 12:50:24.695932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.403 [2024-11-28 12:50:24.696371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.403 [2024-11-28 12:50:24.696387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.403 [2024-11-28 12:50:24.696393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.403 [2024-11-28 12:50:24.696556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.403 [2024-11-28 12:50:24.696719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.403 [2024-11-28 12:50:24.696727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.403 [2024-11-28 12:50:24.696733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.403 [2024-11-28 12:50:24.696739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.403 [2024-11-28 12:50:24.708901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.403 [2024-11-28 12:50:24.709305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.403 [2024-11-28 12:50:24.709322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.403 [2024-11-28 12:50:24.709329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.404 [2024-11-28 12:50:24.709492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.404 [2024-11-28 12:50:24.709656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.404 [2024-11-28 12:50:24.709663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.404 [2024-11-28 12:50:24.709670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.404 [2024-11-28 12:50:24.709676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.404 [2024-11-28 12:50:24.721819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.404 [2024-11-28 12:50:24.722269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.404 [2024-11-28 12:50:24.722285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.404 [2024-11-28 12:50:24.722292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.404 [2024-11-28 12:50:24.722466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.404 [2024-11-28 12:50:24.722639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.404 [2024-11-28 12:50:24.722648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.404 [2024-11-28 12:50:24.722654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.404 [2024-11-28 12:50:24.722660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.404 [2024-11-28 12:50:24.734747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.404 [2024-11-28 12:50:24.735184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.404 [2024-11-28 12:50:24.735228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.404 [2024-11-28 12:50:24.735252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.404 [2024-11-28 12:50:24.735838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.404 [2024-11-28 12:50:24.736189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.404 [2024-11-28 12:50:24.736197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.404 [2024-11-28 12:50:24.736204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.404 [2024-11-28 12:50:24.736210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.404 [2024-11-28 12:50:24.747685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.404 [2024-11-28 12:50:24.748062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.404 [2024-11-28 12:50:24.748082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.404 [2024-11-28 12:50:24.748090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.404 [2024-11-28 12:50:24.748263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.404 [2024-11-28 12:50:24.748437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.404 [2024-11-28 12:50:24.748445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.404 [2024-11-28 12:50:24.748452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.404 [2024-11-28 12:50:24.748458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.404 [2024-11-28 12:50:24.760623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.404 [2024-11-28 12:50:24.761067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.404 [2024-11-28 12:50:24.761115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.404 [2024-11-28 12:50:24.761139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.404 [2024-11-28 12:50:24.761725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.404 [2024-11-28 12:50:24.762188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.404 [2024-11-28 12:50:24.762200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.404 [2024-11-28 12:50:24.762210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.404 [2024-11-28 12:50:24.762219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.404 [2024-11-28 12:50:24.773991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.404 [2024-11-28 12:50:24.774434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.404 [2024-11-28 12:50:24.774450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.404 [2024-11-28 12:50:24.774458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.404 [2024-11-28 12:50:24.774631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.404 [2024-11-28 12:50:24.774808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.404 [2024-11-28 12:50:24.774816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.404 [2024-11-28 12:50:24.774822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.404 [2024-11-28 12:50:24.774828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.404 [2024-11-28 12:50:24.786872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.404 [2024-11-28 12:50:24.787323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.404 [2024-11-28 12:50:24.787340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.404 [2024-11-28 12:50:24.787347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.404 [2024-11-28 12:50:24.787521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.404 [2024-11-28 12:50:24.787697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.404 [2024-11-28 12:50:24.787705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.404 [2024-11-28 12:50:24.787712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.404 [2024-11-28 12:50:24.787718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.404 [2024-11-28 12:50:24.799811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.404 [2024-11-28 12:50:24.800232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.404 [2024-11-28 12:50:24.800249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.404 [2024-11-28 12:50:24.800256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.404 [2024-11-28 12:50:24.800431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.404 [2024-11-28 12:50:24.800605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.404 [2024-11-28 12:50:24.800612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.404 [2024-11-28 12:50:24.800619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.404 [2024-11-28 12:50:24.800625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.404 [2024-11-28 12:50:24.812780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.404 [2024-11-28 12:50:24.813172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.404 [2024-11-28 12:50:24.813189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.404 [2024-11-28 12:50:24.813197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.404 [2024-11-28 12:50:24.813370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.404 [2024-11-28 12:50:24.813545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.404 [2024-11-28 12:50:24.813553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.404 [2024-11-28 12:50:24.813559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.404 [2024-11-28 12:50:24.813565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.404 [2024-11-28 12:50:24.825713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.404 [2024-11-28 12:50:24.826140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.404 [2024-11-28 12:50:24.826157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.404 [2024-11-28 12:50:24.826164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.404 [2024-11-28 12:50:24.826338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.404 [2024-11-28 12:50:24.826511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.404 [2024-11-28 12:50:24.826518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.404 [2024-11-28 12:50:24.826527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.404 [2024-11-28 12:50:24.826533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.404 [2024-11-28 12:50:24.838811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.404 [2024-11-28 12:50:24.839245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.404 [2024-11-28 12:50:24.839263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.404 [2024-11-28 12:50:24.839270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.405 [2024-11-28 12:50:24.839449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.405 [2024-11-28 12:50:24.839627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.405 [2024-11-28 12:50:24.839636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.405 [2024-11-28 12:50:24.839643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.405 [2024-11-28 12:50:24.839649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.405 [2024-11-28 12:50:24.851792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.405 [2024-11-28 12:50:24.852225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.405 [2024-11-28 12:50:24.852242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.405 [2024-11-28 12:50:24.852249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.405 [2024-11-28 12:50:24.852423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.405 [2024-11-28 12:50:24.852595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.405 [2024-11-28 12:50:24.852603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.405 [2024-11-28 12:50:24.852610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.405 [2024-11-28 12:50:24.852616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.405 [2024-11-28 12:50:24.864753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.405 [2024-11-28 12:50:24.865183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.405 [2024-11-28 12:50:24.865200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.405 [2024-11-28 12:50:24.865208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.405 [2024-11-28 12:50:24.865381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.405 [2024-11-28 12:50:24.865554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.405 [2024-11-28 12:50:24.865562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.405 [2024-11-28 12:50:24.865569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.405 [2024-11-28 12:50:24.865575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.405 [2024-11-28 12:50:24.877718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:42.405 [2024-11-28 12:50:24.878120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.405 [2024-11-28 12:50:24.878136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420
00:26:42.405 [2024-11-28 12:50:24.878143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set
00:26:42.405 [2024-11-28 12:50:24.878307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor
00:26:42.405 [2024-11-28 12:50:24.878470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:42.405 [2024-11-28 12:50:24.878478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:42.405 [2024-11-28 12:50:24.878484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:42.405 [2024-11-28 12:50:24.878490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:42.405 [2024-11-28 12:50:24.890719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.405 [2024-11-28 12:50:24.891076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.405 [2024-11-28 12:50:24.891093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.405 [2024-11-28 12:50:24.891100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.405 [2024-11-28 12:50:24.891273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.405 [2024-11-28 12:50:24.891447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.405 [2024-11-28 12:50:24.891455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.405 [2024-11-28 12:50:24.891461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.405 [2024-11-28 12:50:24.891467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.405 [2024-11-28 12:50:24.903600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.405 [2024-11-28 12:50:24.904019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.405 [2024-11-28 12:50:24.904063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.405 [2024-11-28 12:50:24.904086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.405 [2024-11-28 12:50:24.904671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.405 [2024-11-28 12:50:24.905272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.405 [2024-11-28 12:50:24.905298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.405 [2024-11-28 12:50:24.905320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.405 [2024-11-28 12:50:24.905339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.405 [2024-11-28 12:50:24.916754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.405 [2024-11-28 12:50:24.917165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.405 [2024-11-28 12:50:24.917182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.405 [2024-11-28 12:50:24.917194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.405 [2024-11-28 12:50:24.917382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.405 [2024-11-28 12:50:24.917576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.405 [2024-11-28 12:50:24.917584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.405 [2024-11-28 12:50:24.917591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.405 [2024-11-28 12:50:24.917597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.665 [2024-11-28 12:50:24.929838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.665 [2024-11-28 12:50:24.930254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.665 [2024-11-28 12:50:24.930271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.665 [2024-11-28 12:50:24.930278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.665 [2024-11-28 12:50:24.930451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.665 [2024-11-28 12:50:24.930625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.665 [2024-11-28 12:50:24.930633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.665 [2024-11-28 12:50:24.930639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.665 [2024-11-28 12:50:24.930645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.665 [2024-11-28 12:50:24.942703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.665 [2024-11-28 12:50:24.943066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.665 [2024-11-28 12:50:24.943083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.665 [2024-11-28 12:50:24.943091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.665 [2024-11-28 12:50:24.943265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.665 [2024-11-28 12:50:24.943438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.665 [2024-11-28 12:50:24.943446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.665 [2024-11-28 12:50:24.943452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.665 [2024-11-28 12:50:24.943458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.665 [2024-11-28 12:50:24.955582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.665 [2024-11-28 12:50:24.956009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.665 [2024-11-28 12:50:24.956027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.665 [2024-11-28 12:50:24.956034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.665 [2024-11-28 12:50:24.956217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.665 [2024-11-28 12:50:24.956386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.665 [2024-11-28 12:50:24.956393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.665 [2024-11-28 12:50:24.956399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.665 [2024-11-28 12:50:24.956405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.665 [2024-11-28 12:50:24.968527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.665 [2024-11-28 12:50:24.968966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.665 [2024-11-28 12:50:24.969010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.665 [2024-11-28 12:50:24.969033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.665 [2024-11-28 12:50:24.969532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.665 [2024-11-28 12:50:24.969706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.665 [2024-11-28 12:50:24.969714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.665 [2024-11-28 12:50:24.969720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.665 [2024-11-28 12:50:24.969726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.665 [2024-11-28 12:50:24.981404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.665 [2024-11-28 12:50:24.981824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.665 [2024-11-28 12:50:24.981839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.666 [2024-11-28 12:50:24.981846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.666 [2024-11-28 12:50:24.982035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.666 [2024-11-28 12:50:24.982209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.666 [2024-11-28 12:50:24.982217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.666 [2024-11-28 12:50:24.982223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.666 [2024-11-28 12:50:24.982230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.666 [2024-11-28 12:50:24.994352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.666 [2024-11-28 12:50:24.994716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.666 [2024-11-28 12:50:24.994761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.666 [2024-11-28 12:50:24.994784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.666 [2024-11-28 12:50:24.995329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.666 [2024-11-28 12:50:24.995503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.666 [2024-11-28 12:50:24.995511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.666 [2024-11-28 12:50:24.995520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.666 [2024-11-28 12:50:24.995527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.666 [2024-11-28 12:50:25.007296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.666 [2024-11-28 12:50:25.007718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.666 [2024-11-28 12:50:25.007735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.666 [2024-11-28 12:50:25.007742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.666 [2024-11-28 12:50:25.007914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.666 [2024-11-28 12:50:25.008095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.666 [2024-11-28 12:50:25.008103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.666 [2024-11-28 12:50:25.008110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.666 [2024-11-28 12:50:25.008116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.666 [2024-11-28 12:50:25.020250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.666 [2024-11-28 12:50:25.020679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.666 [2024-11-28 12:50:25.020724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.666 [2024-11-28 12:50:25.020749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.666 [2024-11-28 12:50:25.021347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.666 [2024-11-28 12:50:25.021811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.666 [2024-11-28 12:50:25.021819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.666 [2024-11-28 12:50:25.021826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.666 [2024-11-28 12:50:25.021832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.666 [2024-11-28 12:50:25.033139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.666 [2024-11-28 12:50:25.033568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.666 [2024-11-28 12:50:25.033585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.666 [2024-11-28 12:50:25.033593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.666 [2024-11-28 12:50:25.033765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.666 [2024-11-28 12:50:25.033939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.666 [2024-11-28 12:50:25.033955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.666 [2024-11-28 12:50:25.033962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.666 [2024-11-28 12:50:25.033969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.666 [2024-11-28 12:50:25.046055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.666 [2024-11-28 12:50:25.046465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.666 [2024-11-28 12:50:25.046482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.666 [2024-11-28 12:50:25.046489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.666 [2024-11-28 12:50:25.046661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.666 [2024-11-28 12:50:25.046834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.666 [2024-11-28 12:50:25.046842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.666 [2024-11-28 12:50:25.046848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.666 [2024-11-28 12:50:25.046854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.666 [2024-11-28 12:50:25.059000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.666 [2024-11-28 12:50:25.059437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.666 [2024-11-28 12:50:25.059480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.666 [2024-11-28 12:50:25.059503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.666 [2024-11-28 12:50:25.059929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.666 [2024-11-28 12:50:25.060108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.666 [2024-11-28 12:50:25.060117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.666 [2024-11-28 12:50:25.060123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.666 [2024-11-28 12:50:25.060129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.666 [2024-11-28 12:50:25.071981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.666 [2024-11-28 12:50:25.072408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.666 [2024-11-28 12:50:25.072425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.666 [2024-11-28 12:50:25.072432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.666 [2024-11-28 12:50:25.072607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.666 [2024-11-28 12:50:25.072780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.666 [2024-11-28 12:50:25.072788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.666 [2024-11-28 12:50:25.072794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.666 [2024-11-28 12:50:25.072800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.666 [2024-11-28 12:50:25.084938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.666 [2024-11-28 12:50:25.085301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.666 [2024-11-28 12:50:25.085317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.666 [2024-11-28 12:50:25.085328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.666 [2024-11-28 12:50:25.085507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.666 [2024-11-28 12:50:25.085686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.666 [2024-11-28 12:50:25.085694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.666 [2024-11-28 12:50:25.085702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.666 [2024-11-28 12:50:25.085708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.666 [2024-11-28 12:50:25.097973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.666 [2024-11-28 12:50:25.098424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.666 [2024-11-28 12:50:25.098441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.666 [2024-11-28 12:50:25.098448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.666 [2024-11-28 12:50:25.098628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.666 [2024-11-28 12:50:25.098807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.666 [2024-11-28 12:50:25.098815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.666 [2024-11-28 12:50:25.098821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.666 [2024-11-28 12:50:25.098828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.666 [2024-11-28 12:50:25.110954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.666 [2024-11-28 12:50:25.111367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.666 [2024-11-28 12:50:25.111384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.667 [2024-11-28 12:50:25.111391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.667 [2024-11-28 12:50:25.111564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.667 [2024-11-28 12:50:25.111738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.667 [2024-11-28 12:50:25.111746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.667 [2024-11-28 12:50:25.111752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.667 [2024-11-28 12:50:25.111759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.667 [2024-11-28 12:50:25.123905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.667 [2024-11-28 12:50:25.124350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.667 [2024-11-28 12:50:25.124397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.667 [2024-11-28 12:50:25.124421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.667 [2024-11-28 12:50:25.124906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.667 [2024-11-28 12:50:25.125088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.667 [2024-11-28 12:50:25.125097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.667 [2024-11-28 12:50:25.125104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.667 [2024-11-28 12:50:25.125110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.667 [2024-11-28 12:50:25.136958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.667 [2024-11-28 12:50:25.137381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.667 [2024-11-28 12:50:25.137398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.667 [2024-11-28 12:50:25.137405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.667 [2024-11-28 12:50:25.137578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.667 [2024-11-28 12:50:25.137756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.667 [2024-11-28 12:50:25.137764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.667 [2024-11-28 12:50:25.137770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.667 [2024-11-28 12:50:25.137776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.667 [2024-11-28 12:50:25.149849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.667 [2024-11-28 12:50:25.150294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.667 [2024-11-28 12:50:25.150311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.667 [2024-11-28 12:50:25.150318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.667 [2024-11-28 12:50:25.150492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.667 [2024-11-28 12:50:25.150670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.667 [2024-11-28 12:50:25.150678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.667 [2024-11-28 12:50:25.150684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.667 [2024-11-28 12:50:25.150690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.667 [2024-11-28 12:50:25.162666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.667 [2024-11-28 12:50:25.163069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.667 [2024-11-28 12:50:25.163085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.667 [2024-11-28 12:50:25.163092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.667 [2024-11-28 12:50:25.163255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.667 [2024-11-28 12:50:25.163419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.667 [2024-11-28 12:50:25.163426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.667 [2024-11-28 12:50:25.163435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.667 [2024-11-28 12:50:25.163442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.667 [2024-11-28 12:50:25.175549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.667 [2024-11-28 12:50:25.175985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.667 [2024-11-28 12:50:25.176002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.667 [2024-11-28 12:50:25.176009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.667 [2024-11-28 12:50:25.176188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.667 [2024-11-28 12:50:25.176366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.667 [2024-11-28 12:50:25.176375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.667 [2024-11-28 12:50:25.176381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.667 [2024-11-28 12:50:25.176388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.925 [2024-11-28 12:50:25.188674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.925 [2024-11-28 12:50:25.189029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.925 [2024-11-28 12:50:25.189046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.925 [2024-11-28 12:50:25.189053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.925 [2024-11-28 12:50:25.189226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.925 [2024-11-28 12:50:25.189399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.925 [2024-11-28 12:50:25.189407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.925 [2024-11-28 12:50:25.189413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.925 [2024-11-28 12:50:25.189419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.925 [2024-11-28 12:50:25.201678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.925 [2024-11-28 12:50:25.202128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.925 [2024-11-28 12:50:25.202174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.925 [2024-11-28 12:50:25.202198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.925 [2024-11-28 12:50:25.202785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.925 [2024-11-28 12:50:25.203050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.925 [2024-11-28 12:50:25.203059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.925 [2024-11-28 12:50:25.203066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.925 [2024-11-28 12:50:25.203072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.925 [2024-11-28 12:50:25.214597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.925 [2024-11-28 12:50:25.215086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.925 [2024-11-28 12:50:25.215131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.925 [2024-11-28 12:50:25.215154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.925 [2024-11-28 12:50:25.215739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.926 [2024-11-28 12:50:25.216215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.926 [2024-11-28 12:50:25.216224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.926 [2024-11-28 12:50:25.216230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.926 [2024-11-28 12:50:25.216237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.926 [2024-11-28 12:50:25.228314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.926 [2024-11-28 12:50:25.228736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.926 [2024-11-28 12:50:25.228784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.926 [2024-11-28 12:50:25.228808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.926 [2024-11-28 12:50:25.229360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.926 [2024-11-28 12:50:25.229534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.926 [2024-11-28 12:50:25.229542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.926 [2024-11-28 12:50:25.229550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.926 [2024-11-28 12:50:25.229556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.926 [2024-11-28 12:50:25.241254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.926 [2024-11-28 12:50:25.241683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.926 [2024-11-28 12:50:25.241700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.926 [2024-11-28 12:50:25.241708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.926 [2024-11-28 12:50:25.241882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.926 [2024-11-28 12:50:25.242064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.926 [2024-11-28 12:50:25.242072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.926 [2024-11-28 12:50:25.242079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.926 [2024-11-28 12:50:25.242085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2675257 Killed "${NVMF_APP[@]}" "$@" 00:26:42.926 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:42.926 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:42.926 [2024-11-28 12:50:25.254214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.926 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:42.926 [2024-11-28 12:50:25.254636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.926 [2024-11-28 12:50:25.254654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.926 [2024-11-28 12:50:25.254661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.926 [2024-11-28 12:50:25.254835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.926 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:42.926 [2024-11-28 12:50:25.255034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.926 [2024-11-28 12:50:25.255043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.926 [2024-11-28 12:50:25.255050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.926 [2024-11-28 12:50:25.255056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.926 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:42.926 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2676444 00:26:42.926 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:42.926 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2676444 00:26:42.926 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2676444 ']' 00:26:42.926 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.926 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:42.926 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:42.926 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:42.926 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:42.926 [2024-11-28 12:50:25.267370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.926 [2024-11-28 12:50:25.267811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.926 [2024-11-28 12:50:25.267828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.926 [2024-11-28 12:50:25.267835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.926 [2024-11-28 12:50:25.268021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.926 [2024-11-28 12:50:25.268201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.926 [2024-11-28 12:50:25.268210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.926 [2024-11-28 12:50:25.268216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.926 [2024-11-28 12:50:25.268223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.926 [2024-11-28 12:50:25.280543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.926 [2024-11-28 12:50:25.280919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.926 [2024-11-28 12:50:25.280939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.926 [2024-11-28 12:50:25.280954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.926 [2024-11-28 12:50:25.281134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.926 [2024-11-28 12:50:25.281312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.926 [2024-11-28 12:50:25.281320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.926 [2024-11-28 12:50:25.281327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.926 [2024-11-28 12:50:25.281333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.926 [2024-11-28 12:50:25.293552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.926 [2024-11-28 12:50:25.293990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.926 [2024-11-28 12:50:25.294008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.926 [2024-11-28 12:50:25.294016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.926 [2024-11-28 12:50:25.294191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.926 [2024-11-28 12:50:25.294366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.926 [2024-11-28 12:50:25.294374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.926 [2024-11-28 12:50:25.294381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.926 [2024-11-28 12:50:25.294387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.926 [2024-11-28 12:50:25.306674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.926 [2024-11-28 12:50:25.307116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.926 [2024-11-28 12:50:25.307133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.926 [2024-11-28 12:50:25.307140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.926 [2024-11-28 12:50:25.307315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.926 [2024-11-28 12:50:25.307488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.926 [2024-11-28 12:50:25.307496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.926 [2024-11-28 12:50:25.307503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.926 [2024-11-28 12:50:25.307509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.926 [2024-11-28 12:50:25.309691] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:26:42.926 [2024-11-28 12:50:25.309729] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:42.926 [2024-11-28 12:50:25.319708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.926 [2024-11-28 12:50:25.320148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.926 [2024-11-28 12:50:25.320171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.926 [2024-11-28 12:50:25.320179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.926 [2024-11-28 12:50:25.320353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.926 [2024-11-28 12:50:25.320528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.926 [2024-11-28 12:50:25.320536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.927 [2024-11-28 12:50:25.320543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.927 [2024-11-28 12:50:25.320549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.927 [2024-11-28 12:50:25.332752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.927 [2024-11-28 12:50:25.333175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.927 [2024-11-28 12:50:25.333192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.927 [2024-11-28 12:50:25.333201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.927 [2024-11-28 12:50:25.333376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.927 [2024-11-28 12:50:25.333551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.927 [2024-11-28 12:50:25.333559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.927 [2024-11-28 12:50:25.333566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.927 [2024-11-28 12:50:25.333572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.927 [2024-11-28 12:50:25.345787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.927 [2024-11-28 12:50:25.346249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.927 [2024-11-28 12:50:25.346266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.927 [2024-11-28 12:50:25.346274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.927 [2024-11-28 12:50:25.346453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.927 [2024-11-28 12:50:25.346633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.927 [2024-11-28 12:50:25.346642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.927 [2024-11-28 12:50:25.346649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.927 [2024-11-28 12:50:25.346655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.927 [2024-11-28 12:50:25.358893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.927 [2024-11-28 12:50:25.359340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.927 [2024-11-28 12:50:25.359358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.927 [2024-11-28 12:50:25.359366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.927 [2024-11-28 12:50:25.359548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.927 [2024-11-28 12:50:25.359727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.927 [2024-11-28 12:50:25.359736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.927 [2024-11-28 12:50:25.359742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.927 [2024-11-28 12:50:25.359749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.927 [2024-11-28 12:50:25.371877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.927 [2024-11-28 12:50:25.372325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.927 [2024-11-28 12:50:25.372342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.927 [2024-11-28 12:50:25.372350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.927 [2024-11-28 12:50:25.372523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.927 [2024-11-28 12:50:25.372698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.927 [2024-11-28 12:50:25.372706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.927 [2024-11-28 12:50:25.372712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.927 [2024-11-28 12:50:25.372718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.927 [2024-11-28 12:50:25.376176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:42.927 [2024-11-28 12:50:25.385000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.927 [2024-11-28 12:50:25.385421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.927 [2024-11-28 12:50:25.385439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.927 [2024-11-28 12:50:25.385447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.927 [2024-11-28 12:50:25.385621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.927 [2024-11-28 12:50:25.385796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.927 [2024-11-28 12:50:25.385805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.927 [2024-11-28 12:50:25.385812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.927 [2024-11-28 12:50:25.385818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.927 [2024-11-28 12:50:25.398130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.927 [2024-11-28 12:50:25.398494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.927 [2024-11-28 12:50:25.398511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.927 [2024-11-28 12:50:25.398519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.927 [2024-11-28 12:50:25.398693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.927 [2024-11-28 12:50:25.398869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.927 [2024-11-28 12:50:25.398882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.927 [2024-11-28 12:50:25.398889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.927 [2024-11-28 12:50:25.398895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.927 [2024-11-28 12:50:25.411187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.927 [2024-11-28 12:50:25.411632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.927 [2024-11-28 12:50:25.411650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.927 [2024-11-28 12:50:25.411658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.927 [2024-11-28 12:50:25.411831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.927 [2024-11-28 12:50:25.412016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.927 [2024-11-28 12:50:25.412026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.927 [2024-11-28 12:50:25.412034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.927 [2024-11-28 12:50:25.412041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:42.927 [2024-11-28 12:50:25.417180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:42.927 [2024-11-28 12:50:25.417204] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:42.927 [2024-11-28 12:50:25.417212] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:42.927 [2024-11-28 12:50:25.417218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:42.927 [2024-11-28 12:50:25.417223] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:42.927 [2024-11-28 12:50:25.418488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:42.927 [2024-11-28 12:50:25.418579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:42.927 [2024-11-28 12:50:25.418581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.927 [2024-11-28 12:50:25.424366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.927 [2024-11-28 12:50:25.424826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.927 [2024-11-28 12:50:25.424846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.927 [2024-11-28 12:50:25.424855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.927 [2024-11-28 12:50:25.425039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.927 [2024-11-28 12:50:25.425222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.927 [2024-11-28 12:50:25.425231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.927 [2024-11-28 12:50:25.425239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.927 [2024-11-28 12:50:25.425246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:42.927 [2024-11-28 12:50:25.437572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:42.927 [2024-11-28 12:50:25.438032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.927 [2024-11-28 12:50:25.438058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:42.927 [2024-11-28 12:50:25.438066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:42.927 [2024-11-28 12:50:25.438247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:42.927 [2024-11-28 12:50:25.438428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:42.927 [2024-11-28 12:50:25.438436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:42.927 [2024-11-28 12:50:25.438444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:42.927 [2024-11-28 12:50:25.438451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.215 [2024-11-28 12:50:25.450764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.215 [2024-11-28 12:50:25.451227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.215 [2024-11-28 12:50:25.451249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:43.215 [2024-11-28 12:50:25.451257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:43.215 [2024-11-28 12:50:25.451438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:43.215 [2024-11-28 12:50:25.451619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.216 [2024-11-28 12:50:25.451628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.216 [2024-11-28 12:50:25.451635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.216 [2024-11-28 12:50:25.451642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.216 [2024-11-28 12:50:25.463941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.216 [2024-11-28 12:50:25.464408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.216 [2024-11-28 12:50:25.464428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:43.216 [2024-11-28 12:50:25.464437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:43.216 [2024-11-28 12:50:25.464617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:43.216 [2024-11-28 12:50:25.464797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.216 [2024-11-28 12:50:25.464806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.216 [2024-11-28 12:50:25.464814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.216 [2024-11-28 12:50:25.464821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.216 [2024-11-28 12:50:25.477145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.216 [2024-11-28 12:50:25.477511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.216 [2024-11-28 12:50:25.477530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:43.216 [2024-11-28 12:50:25.477538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:43.216 [2024-11-28 12:50:25.477724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:43.216 [2024-11-28 12:50:25.477905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.216 [2024-11-28 12:50:25.477914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.216 [2024-11-28 12:50:25.477921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.216 [2024-11-28 12:50:25.477929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.216 [2024-11-28 12:50:25.490244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.216 [2024-11-28 12:50:25.490545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.216 [2024-11-28 12:50:25.490562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:43.216 [2024-11-28 12:50:25.490570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:43.216 [2024-11-28 12:50:25.490750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:43.216 [2024-11-28 12:50:25.490930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.216 [2024-11-28 12:50:25.490940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.216 [2024-11-28 12:50:25.490952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.216 [2024-11-28 12:50:25.490959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.216 [2024-11-28 12:50:25.503446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.216 [2024-11-28 12:50:25.503869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.216 [2024-11-28 12:50:25.503886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:43.216 [2024-11-28 12:50:25.503893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:43.216 [2024-11-28 12:50:25.504075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:43.216 [2024-11-28 12:50:25.504255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.216 [2024-11-28 12:50:25.504263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.216 [2024-11-28 12:50:25.504270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.216 [2024-11-28 12:50:25.504276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.216 [2024-11-28 12:50:25.516586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.216 [2024-11-28 12:50:25.517010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.216 [2024-11-28 12:50:25.517028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:43.216 [2024-11-28 12:50:25.517035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:43.216 [2024-11-28 12:50:25.517213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:43.216 [2024-11-28 12:50:25.517392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.216 [2024-11-28 12:50:25.517405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.216 [2024-11-28 12:50:25.517412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.216 [2024-11-28 12:50:25.517419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.216 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:43.216 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:43.216 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:43.216 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:43.216 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.216 [2024-11-28 12:50:25.529746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.216 [2024-11-28 12:50:25.530096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.216 [2024-11-28 12:50:25.530113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:43.216 [2024-11-28 12:50:25.530122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:43.216 [2024-11-28 12:50:25.530301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:43.216 [2024-11-28 12:50:25.530481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.216 [2024-11-28 12:50:25.530490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.216 [2024-11-28 12:50:25.530496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.216 [2024-11-28 12:50:25.530503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.216 [2024-11-28 12:50:25.542832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.216 [2024-11-28 12:50:25.543207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.216 [2024-11-28 12:50:25.543225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:43.216 [2024-11-28 12:50:25.543233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:43.216 [2024-11-28 12:50:25.543411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:43.216 [2024-11-28 12:50:25.543590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.216 [2024-11-28 12:50:25.543598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.216 [2024-11-28 12:50:25.543605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.216 [2024-11-28 12:50:25.543612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.216 [2024-11-28 12:50:25.555960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.216 [2024-11-28 12:50:25.556265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.216 [2024-11-28 12:50:25.556282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:43.216 [2024-11-28 12:50:25.556289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:43.216 [2024-11-28 12:50:25.556469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:43.216 [2024-11-28 12:50:25.556652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.216 [2024-11-28 12:50:25.556661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.216 [2024-11-28 12:50:25.556667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.216 [2024-11-28 12:50:25.556673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.216 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:43.216 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:43.216 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.216 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.216 [2024-11-28 12:50:25.568174] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:43.216 [2024-11-28 12:50:25.569159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.216 [2024-11-28 12:50:25.569458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.216 [2024-11-28 12:50:25.569474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:43.216 [2024-11-28 12:50:25.569482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:43.216 [2024-11-28 12:50:25.569661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:43.217 [2024-11-28 12:50:25.569842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.217 [2024-11-28 12:50:25.569850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.217 [2024-11-28 12:50:25.569857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.217 [2024-11-28 12:50:25.569863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.217 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.217 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:43.217 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.217 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.217 [2024-11-28 12:50:25.582348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.217 [2024-11-28 12:50:25.582780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.217 [2024-11-28 12:50:25.582797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:43.217 [2024-11-28 12:50:25.582804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:43.217 [2024-11-28 12:50:25.582988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:43.217 [2024-11-28 12:50:25.583168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.217 [2024-11-28 12:50:25.583176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.217 [2024-11-28 12:50:25.583184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.217 [2024-11-28 12:50:25.583190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.217 [2024-11-28 12:50:25.595494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.217 [2024-11-28 12:50:25.595969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.217 [2024-11-28 12:50:25.595986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:43.217 [2024-11-28 12:50:25.595994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:43.217 [2024-11-28 12:50:25.596172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:43.217 [2024-11-28 12:50:25.596351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.217 [2024-11-28 12:50:25.596360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.217 [2024-11-28 12:50:25.596366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.217 [2024-11-28 12:50:25.596373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.217 Malloc0 00:26:43.217 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.217 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:43.217 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.217 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.217 [2024-11-28 12:50:25.608695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.217 [2024-11-28 12:50:25.609070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.217 [2024-11-28 12:50:25.609088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:43.217 [2024-11-28 12:50:25.609096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:43.217 [2024-11-28 12:50:25.609276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:43.217 [2024-11-28 12:50:25.609455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.217 [2024-11-28 12:50:25.609463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.217 [2024-11-28 12:50:25.609470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.217 [2024-11-28 12:50:25.609476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.217 4740.50 IOPS, 18.52 MiB/s [2024-11-28T11:50:25.736Z] 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.217 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:43.217 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.217 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.217 [2024-11-28 12:50:25.621934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.217 [2024-11-28 12:50:25.622379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.217 [2024-11-28 12:50:25.622397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa24510 with addr=10.0.0.2, port=4420 00:26:43.217 [2024-11-28 12:50:25.622404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa24510 is same with the state(6) to be set 00:26:43.217 [2024-11-28 12:50:25.622584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa24510 (9): Bad file descriptor 00:26:43.217 [2024-11-28 12:50:25.622763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.217 [2024-11-28 12:50:25.622775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.217 [2024-11-28 12:50:25.622782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.217 [2024-11-28 12:50:25.622788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.217 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.217 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:43.217 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.217 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:43.217 [2024-11-28 12:50:25.628466] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:43.217 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.217 12:50:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2675519 00:26:43.217 [2024-11-28 12:50:25.635102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.217 [2024-11-28 12:50:25.665319] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:26:45.528 5525.43 IOPS, 21.58 MiB/s [2024-11-28T11:50:28.981Z] 6205.62 IOPS, 24.24 MiB/s [2024-11-28T11:50:29.913Z] 6744.56 IOPS, 26.35 MiB/s [2024-11-28T11:50:30.846Z] 7143.20 IOPS, 27.90 MiB/s [2024-11-28T11:50:31.780Z] 7492.82 IOPS, 29.27 MiB/s [2024-11-28T11:50:32.714Z] 7775.17 IOPS, 30.37 MiB/s [2024-11-28T11:50:33.649Z] 8001.92 IOPS, 31.26 MiB/s [2024-11-28T11:50:35.026Z] 8202.00 IOPS, 32.04 MiB/s 00:26:52.507 Latency(us) 00:26:52.507 [2024-11-28T11:50:35.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.507 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:52.507 Verification LBA range: start 0x0 length 0x4000 00:26:52.507 Nvme1n1 : 15.00 8365.89 32.68 10821.79 0.00 6650.54 698.10 12936.24 00:26:52.507 [2024-11-28T11:50:35.026Z] =================================================================================================================== 00:26:52.507 [2024-11-28T11:50:35.026Z] Total : 8365.89 32.68 10821.79 0.00 6650.54 698.10 12936.24 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:26:52.507 12:50:34 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:52.507 rmmod nvme_tcp 00:26:52.507 rmmod nvme_fabrics 00:26:52.507 rmmod nvme_keyring 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2676444 ']' 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2676444 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2676444 ']' 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2676444 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2676444 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2676444' 00:26:52.507 killing process with pid 2676444 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@973 -- # kill 2676444 00:26:52.507 12:50:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2676444 00:26:52.781 12:50:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:52.781 12:50:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:52.781 12:50:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:52.781 12:50:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:26:52.781 12:50:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:52.782 12:50:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:26:52.782 12:50:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:26:52.782 12:50:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:52.782 12:50:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:52.782 12:50:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.782 12:50:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:52.782 12:50:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.684 12:50:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:54.684 00:26:54.684 real 0m25.409s 00:26:54.684 user 1m0.252s 00:26:54.684 sys 0m6.378s 00:26:54.684 12:50:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:54.684 12:50:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:54.684 ************************************ 00:26:54.684 END TEST nvmf_bdevperf 00:26:54.684 ************************************ 00:26:54.943 12:50:37 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.943 ************************************ 00:26:54.943 START TEST nvmf_target_disconnect 00:26:54.943 ************************************ 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:54.943 * Looking for test storage... 00:26:54.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:26:54.943 12:50:37 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:26:54.943 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:54.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.944 --rc genhtml_branch_coverage=1 00:26:54.944 --rc genhtml_function_coverage=1 00:26:54.944 --rc genhtml_legend=1 00:26:54.944 --rc geninfo_all_blocks=1 00:26:54.944 --rc geninfo_unexecuted_blocks=1 
00:26:54.944 00:26:54.944 ' 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:54.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.944 --rc genhtml_branch_coverage=1 00:26:54.944 --rc genhtml_function_coverage=1 00:26:54.944 --rc genhtml_legend=1 00:26:54.944 --rc geninfo_all_blocks=1 00:26:54.944 --rc geninfo_unexecuted_blocks=1 00:26:54.944 00:26:54.944 ' 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:54.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.944 --rc genhtml_branch_coverage=1 00:26:54.944 --rc genhtml_function_coverage=1 00:26:54.944 --rc genhtml_legend=1 00:26:54.944 --rc geninfo_all_blocks=1 00:26:54.944 --rc geninfo_unexecuted_blocks=1 00:26:54.944 00:26:54.944 ' 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:54.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.944 --rc genhtml_branch_coverage=1 00:26:54.944 --rc genhtml_function_coverage=1 00:26:54.944 --rc genhtml_legend=1 00:26:54.944 --rc geninfo_all_blocks=1 00:26:54.944 --rc geninfo_unexecuted_blocks=1 00:26:54.944 00:26:54.944 ' 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:54.944 12:50:37 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:54.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:26:54.944 12:50:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:00.214 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:00.214 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:00.214 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:00.214 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:00.214 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:00.214 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:00.214 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:00.214 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:00.214 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:00.214 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:00.214 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:00.214 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:00.214 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:00.214 
12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:00.214 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:00.214 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:00.214 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:00.214 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:00.215 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:00.215 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:00.215 Found net devices under 0000:86:00.0: cvl_0_0 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:00.215 Found net devices under 0000:86:00.1: cvl_0_1 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:00.215 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:00.473 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:00.473 12:50:42 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:00.473 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:00.473 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:00.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:00.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:27:00.473 00:27:00.473 --- 10.0.0.2 ping statistics --- 00:27:00.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.473 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:27:00.473 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:00.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:00.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:27:00.473 00:27:00.473 --- 10.0.0.1 ping statistics --- 00:27:00.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.473 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:00.474 12:50:42 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:00.474 ************************************ 00:27:00.474 START TEST nvmf_target_disconnect_tc1 00:27:00.474 ************************************ 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:00.474 [2024-11-28 12:50:42.980893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.474 [2024-11-28 12:50:42.980935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174fac0 with 
addr=10.0.0.2, port=4420 00:27:00.474 [2024-11-28 12:50:42.980963] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:00.474 [2024-11-28 12:50:42.980976] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:00.474 [2024-11-28 12:50:42.980983] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:00.474 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:00.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:00.474 Initializing NVMe Controllers 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:00.474 00:27:00.474 real 0m0.103s 00:27:00.474 user 0m0.048s 00:27:00.474 sys 0m0.054s 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:00.474 12:50:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:00.474 ************************************ 00:27:00.474 END TEST nvmf_target_disconnect_tc1 00:27:00.474 ************************************ 00:27:00.732 12:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:00.732 12:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:00.732 12:50:43 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:00.732 12:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:00.732 ************************************ 00:27:00.732 START TEST nvmf_target_disconnect_tc2 00:27:00.732 ************************************ 00:27:00.732 12:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:00.732 12:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:00.732 12:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:00.732 12:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:00.732 12:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:00.732 12:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:00.732 12:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2681599 00:27:00.732 12:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2681599 00:27:00.732 12:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:00.732 12:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2681599 ']' 00:27:00.732 12:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.732 12:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:00.732 12:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:00.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:00.732 12:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:00.732 12:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:00.732 [2024-11-28 12:50:43.114924] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:27:00.732 [2024-11-28 12:50:43.114971] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:00.732 [2024-11-28 12:50:43.195500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:00.732 [2024-11-28 12:50:43.237535] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:00.732 [2024-11-28 12:50:43.237574] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:00.732 [2024-11-28 12:50:43.237583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:00.732 [2024-11-28 12:50:43.237589] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:00.732 [2024-11-28 12:50:43.237594] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:00.732 [2024-11-28 12:50:43.242966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:27:00.732 [2024-11-28 12:50:43.243068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:27:00.732 [2024-11-28 12:50:43.243176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:27:00.732 [2024-11-28 12:50:43.243176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:27:01.663 12:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:01.663 12:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:27:01.663 12:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:01.663 12:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:01.663 12:50:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:01.663 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:01.664 Malloc0
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:01.664 [2024-11-28 12:50:44.052336] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:01.664 [2024-11-28 12:50:44.080559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2681648
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:27:01.664 12:50:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:04.214 12:50:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2681599
00:27:04.214 12:50:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:27:04.214 Read completed with error (sct=0, sc=8)
00:27:04.214 starting I/O failed
00:27:04.214 Read completed with error (sct=0, sc=8)
00:27:04.214 starting I/O failed
00:27:04.214 Read completed with error (sct=0, sc=8)
00:27:04.214 starting I/O failed
00:27:04.214 Read completed with error (sct=0, sc=8)
00:27:04.214 starting I/O failed
00:27:04.214 Read completed with error (sct=0, sc=8)
00:27:04.214 starting I/O failed
00:27:04.214 Read completed with error (sct=0, sc=8)
00:27:04.214 starting I/O failed
00:27:04.214 Read completed with error (sct=0, sc=8)
00:27:04.214 starting I/O failed
00:27:04.214 Write completed with error (sct=0, sc=8)
00:27:04.214 starting I/O failed
00:27:04.214 Write completed with error (sct=0, sc=8)
00:27:04.214 starting I/O failed
00:27:04.214 Read completed with error (sct=0, sc=8)
00:27:04.214 starting I/O failed
00:27:04.214 Read completed with error (sct=0, sc=8)
00:27:04.214 starting I/O failed
00:27:04.214 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 [2024-11-28 12:50:46.106490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 [2024-11-28 12:50:46.106702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 [2024-11-28 12:50:46.106908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Read completed with error (sct=0, sc=8)
00:27:04.215 starting I/O failed
00:27:04.215 Write completed with error (sct=0, sc=8)
00:27:04.216 starting I/O failed
00:27:04.216 Read completed with error (sct=0, sc=8)
00:27:04.216 starting I/O failed
00:27:04.216 Write completed with error (sct=0, sc=8)
00:27:04.216 starting I/O failed
00:27:04.216 Write completed with error (sct=0, sc=8)
00:27:04.216 starting I/O failed
00:27:04.216 Write completed with error (sct=0, sc=8)
00:27:04.216 starting I/O failed
00:27:04.216 [2024-11-28 12:50:46.107108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:04.216 [2024-11-28 12:50:46.107363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.107388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.107629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.107640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.107895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.107928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.108150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.108183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.108373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.108385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.108548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.108581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.108784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.108816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.109020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.109054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.109239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.109250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.109397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.109428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.109645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.109677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.109884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.109930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.110100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.110112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.110326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.110357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.110603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.110636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.110887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.110919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.111094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.111154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.111390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.111441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.111775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.111810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.112014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.112056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.112198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.112231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.112511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.112544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.112820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.112853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.113054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.113072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.113236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.113251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.113502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.113519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.113684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.113715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.114010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.114044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.114338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.114371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.114622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.114654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.114865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.114897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.115250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.115267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.115480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.115496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.115653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.115669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.115864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.115880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.216 qpair failed and we were unable to recover it.
00:27:04.216 [2024-11-28 12:50:46.116113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.216 [2024-11-28 12:50:46.116148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.116302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.116335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.116584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.116617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.116794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.116827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.117037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.117053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.117220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.117253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.117471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.117504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.117693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.117726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.117971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.118005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.118137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.118170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.118358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.118375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.118533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.118564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.118773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.118806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.119072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.119106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.119300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.119339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.119550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.119567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.119801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.119817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.120033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.120066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.120279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.120313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.120504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.120520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.120669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.120685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.120845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.120861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.121096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.121113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.121268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.121284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.121515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.217 [2024-11-28 12:50:46.121531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.217 qpair failed and we were unable to recover it.
00:27:04.217 [2024-11-28 12:50:46.121698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-11-28 12:50:46.121717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.217 qpair failed and we were unable to recover it. 00:27:04.217 [2024-11-28 12:50:46.121972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-11-28 12:50:46.121989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.217 qpair failed and we were unable to recover it. 00:27:04.217 [2024-11-28 12:50:46.122219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-11-28 12:50:46.122235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.217 qpair failed and we were unable to recover it. 00:27:04.217 [2024-11-28 12:50:46.122448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-11-28 12:50:46.122464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.217 qpair failed and we were unable to recover it. 00:27:04.217 [2024-11-28 12:50:46.122730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-11-28 12:50:46.122746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.217 qpair failed and we were unable to recover it. 
00:27:04.217 [2024-11-28 12:50:46.122956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-11-28 12:50:46.122973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.217 qpair failed and we were unable to recover it. 00:27:04.217 [2024-11-28 12:50:46.123211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-11-28 12:50:46.123227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.217 qpair failed and we were unable to recover it. 00:27:04.217 [2024-11-28 12:50:46.123336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-11-28 12:50:46.123352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.217 qpair failed and we were unable to recover it. 00:27:04.217 [2024-11-28 12:50:46.123581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-11-28 12:50:46.123598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.217 qpair failed and we were unable to recover it. 00:27:04.217 [2024-11-28 12:50:46.123808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-11-28 12:50:46.123824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.217 qpair failed and we were unable to recover it. 
00:27:04.217 [2024-11-28 12:50:46.123985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-11-28 12:50:46.124019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.217 qpair failed and we were unable to recover it. 00:27:04.217 [2024-11-28 12:50:46.124264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-11-28 12:50:46.124297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.217 qpair failed and we were unable to recover it. 00:27:04.217 [2024-11-28 12:50:46.124545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-11-28 12:50:46.124578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.217 qpair failed and we were unable to recover it. 00:27:04.217 [2024-11-28 12:50:46.124708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-11-28 12:50:46.124741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.217 qpair failed and we were unable to recover it. 00:27:04.217 [2024-11-28 12:50:46.124937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.124993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 
00:27:04.218 [2024-11-28 12:50:46.125203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.125236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.125511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.125544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.125722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.125755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.125957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.125991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.126258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.126291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 
00:27:04.218 [2024-11-28 12:50:46.126597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.126629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.126885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.126918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.127129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.127162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.127437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.127453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.127624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.127640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 
00:27:04.218 [2024-11-28 12:50:46.127852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.127868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.128108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.128143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.128347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.128386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.128570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.128602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.128749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.128782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 
00:27:04.218 [2024-11-28 12:50:46.128975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.129009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.129276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.129309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.129590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.129623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.129902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.129934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.130224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.130258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 
00:27:04.218 [2024-11-28 12:50:46.130442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.130458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.130718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.130751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.130898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.130930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.131211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.131245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.131431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.131464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 
00:27:04.218 [2024-11-28 12:50:46.131780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.131812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.132080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.132115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.132361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.132393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.132661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.132694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.132830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.132862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 
00:27:04.218 [2024-11-28 12:50:46.133109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.133143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.133415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.133447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.133733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.133766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.133910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.133943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.134195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.134229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 
00:27:04.218 [2024-11-28 12:50:46.134523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.134557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.218 [2024-11-28 12:50:46.134821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.218 [2024-11-28 12:50:46.134854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.218 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.135146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.135163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.135318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.135334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.135551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.135567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 
00:27:04.219 [2024-11-28 12:50:46.135834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.135851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.136089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.136105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.136330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.136346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.136579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.136595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.136762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.136778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 
00:27:04.219 [2024-11-28 12:50:46.136959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.136975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.137191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.137224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.137475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.137506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.137689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.137721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.137912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.137944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 
00:27:04.219 [2024-11-28 12:50:46.138151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.138186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.138429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.138462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.138643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.138675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.138867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.138906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.139190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.139207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 
00:27:04.219 [2024-11-28 12:50:46.139365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.139398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.139689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.139722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.140013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.140048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.140321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.140354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.140639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.140672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 
00:27:04.219 [2024-11-28 12:50:46.140959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.140993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.141177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.141193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.141375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.141409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.141686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.141719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.141994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.142027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 
00:27:04.219 [2024-11-28 12:50:46.142284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.142300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.142558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.142574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.142784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.142800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.219 [2024-11-28 12:50:46.143023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.219 [2024-11-28 12:50:46.143040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.219 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.143211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.143227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 
00:27:04.220 [2024-11-28 12:50:46.143466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.143498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.143742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.143775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.144031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.144066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.144310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.144343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.144534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.144567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 
00:27:04.220 [2024-11-28 12:50:46.144838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.144871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.145153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.145187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.145439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.145473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.145685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.145718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.145997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.146040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 
00:27:04.220 [2024-11-28 12:50:46.146256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.146274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.146421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.146437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.146678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.146711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.146916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.146958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.147215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.147248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 
00:27:04.220 [2024-11-28 12:50:46.147542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.147575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.147825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.147858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.148117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.148151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.148343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.148376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.148557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.148591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 
00:27:04.220 [2024-11-28 12:50:46.148835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.148868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.149008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.149042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.149178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.149211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.149484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.149518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.149692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.149763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 
00:27:04.220 [2024-11-28 12:50:46.150052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.150090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.150306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.150340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.150610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.150643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.150931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.150972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.151242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.151274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 
00:27:04.220 [2024-11-28 12:50:46.151463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.151496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.151673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.151705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.151976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.152010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.152256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.152290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.152555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.152588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 
00:27:04.220 [2024-11-28 12:50:46.152833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.152865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.220 qpair failed and we were unable to recover it. 00:27:04.220 [2024-11-28 12:50:46.153141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.220 [2024-11-28 12:50:46.153175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.153373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.153414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.153664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.153695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.153892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.153926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 
00:27:04.221 [2024-11-28 12:50:46.154131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.154146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.154315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.154346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.154652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.154684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.154932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.154975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.155122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.155138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 
00:27:04.221 [2024-11-28 12:50:46.155318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.155350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.155562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.155593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.155705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.155737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.155945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.155988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.156234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.156266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 
00:27:04.221 [2024-11-28 12:50:46.156560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.156591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.156796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.156829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.157026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.157060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.157324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.157355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.157550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.157582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 
00:27:04.221 [2024-11-28 12:50:46.157799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.157831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.158093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.158110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.158270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.158301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.158543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.158575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.158847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.158879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 
00:27:04.221 [2024-11-28 12:50:46.159131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.159163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.159415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.159447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.159689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.159721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.159989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.160022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.160218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.160252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 
00:27:04.221 [2024-11-28 12:50:46.160434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.160450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.160598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.160630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.160843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.160877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.161063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.161098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.161362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.161378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 
00:27:04.221 [2024-11-28 12:50:46.161614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.161630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.161717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.161731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.161817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.161833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.161990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.162006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 00:27:04.221 [2024-11-28 12:50:46.162221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.221 [2024-11-28 12:50:46.162254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.221 qpair failed and we were unable to recover it. 
00:27:04.222 [2024-11-28 12:50:46.162448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.222 [2024-11-28 12:50:46.162479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.222 qpair failed and we were unable to recover it. 00:27:04.222 [2024-11-28 12:50:46.162723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.222 [2024-11-28 12:50:46.162754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.222 qpair failed and we were unable to recover it. 00:27:04.222 [2024-11-28 12:50:46.162937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.222 [2024-11-28 12:50:46.162982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.222 qpair failed and we were unable to recover it. 00:27:04.222 [2024-11-28 12:50:46.163225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.222 [2024-11-28 12:50:46.163240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.222 qpair failed and we were unable to recover it. 00:27:04.222 [2024-11-28 12:50:46.163473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.222 [2024-11-28 12:50:46.163488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.222 qpair failed and we were unable to recover it. 
00:27:04.222 [2024-11-28 12:50:46.163754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.222 [2024-11-28 12:50:46.163786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.222 qpair failed and we were unable to recover it. 00:27:04.222 [2024-11-28 12:50:46.164059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.222 [2024-11-28 12:50:46.164093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.222 qpair failed and we were unable to recover it. 00:27:04.222 [2024-11-28 12:50:46.164404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.222 [2024-11-28 12:50:46.164435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.222 qpair failed and we were unable to recover it. 00:27:04.222 [2024-11-28 12:50:46.164627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.222 [2024-11-28 12:50:46.164660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.222 qpair failed and we were unable to recover it. 00:27:04.222 [2024-11-28 12:50:46.164802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.222 [2024-11-28 12:50:46.164835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.222 qpair failed and we were unable to recover it. 
00:27:04.222 [2024-11-28 12:50:46.165135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.222 [2024-11-28 12:50:46.165169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.222 qpair failed and we were unable to recover it. 00:27:04.222 [2024-11-28 12:50:46.165420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.222 [2024-11-28 12:50:46.165453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.222 qpair failed and we were unable to recover it. 00:27:04.222 [2024-11-28 12:50:46.165667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.222 [2024-11-28 12:50:46.165699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.222 qpair failed and we were unable to recover it. 00:27:04.222 [2024-11-28 12:50:46.165880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.222 [2024-11-28 12:50:46.165917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.222 qpair failed and we were unable to recover it. 00:27:04.222 [2024-11-28 12:50:46.166133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.222 [2024-11-28 12:50:46.166166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.222 qpair failed and we were unable to recover it. 
00:27:04.224 [2024-11-28 12:50:46.187497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.224 [2024-11-28 12:50:46.187531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.224 qpair failed and we were unable to recover it.
00:27:04.224 [2024-11-28 12:50:46.187810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.224 [2024-11-28 12:50:46.187843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.224 qpair failed and we were unable to recover it.
00:27:04.224 [2024-11-28 12:50:46.187991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.224 [2024-11-28 12:50:46.188024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.224 qpair failed and we were unable to recover it.
00:27:04.224 [2024-11-28 12:50:46.188317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.224 [2024-11-28 12:50:46.188349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.224 qpair failed and we were unable to recover it.
00:27:04.225 [2024-11-28 12:50:46.188717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.225 [2024-11-28 12:50:46.188769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.225 qpair failed and we were unable to recover it.
00:27:04.225 [2024-11-28 12:50:46.192449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.225 [2024-11-28 12:50:46.192481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.225 qpair failed and we were unable to recover it. 00:27:04.225 [2024-11-28 12:50:46.192769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.225 [2024-11-28 12:50:46.192803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.225 qpair failed and we were unable to recover it. 00:27:04.225 [2024-11-28 12:50:46.192986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.225 [2024-11-28 12:50:46.193020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.225 qpair failed and we were unable to recover it. 00:27:04.225 [2024-11-28 12:50:46.193288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.225 [2024-11-28 12:50:46.193321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.225 qpair failed and we were unable to recover it. 00:27:04.225 [2024-11-28 12:50:46.193586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.225 [2024-11-28 12:50:46.193599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.225 qpair failed and we were unable to recover it. 
00:27:04.225 [2024-11-28 12:50:46.193803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.225 [2024-11-28 12:50:46.193815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.225 qpair failed and we were unable to recover it. 00:27:04.225 [2024-11-28 12:50:46.193950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.225 [2024-11-28 12:50:46.193963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.225 qpair failed and we were unable to recover it. 00:27:04.225 [2024-11-28 12:50:46.194136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.225 [2024-11-28 12:50:46.194149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.225 qpair failed and we were unable to recover it. 00:27:04.225 [2024-11-28 12:50:46.194297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.225 [2024-11-28 12:50:46.194309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.225 qpair failed and we were unable to recover it. 00:27:04.225 [2024-11-28 12:50:46.194466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.225 [2024-11-28 12:50:46.194478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.225 qpair failed and we were unable to recover it. 
00:27:04.225 [2024-11-28 12:50:46.194571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.225 [2024-11-28 12:50:46.194582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.225 qpair failed and we were unable to recover it. 00:27:04.225 [2024-11-28 12:50:46.194829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.225 [2024-11-28 12:50:46.194863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.225 qpair failed and we were unable to recover it. 00:27:04.225 [2024-11-28 12:50:46.195128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.225 [2024-11-28 12:50:46.195162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.225 qpair failed and we were unable to recover it. 00:27:04.225 [2024-11-28 12:50:46.195461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.225 [2024-11-28 12:50:46.195495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.225 qpair failed and we were unable to recover it. 00:27:04.225 [2024-11-28 12:50:46.195740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.225 [2024-11-28 12:50:46.195772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.225 qpair failed and we were unable to recover it. 
00:27:04.225 [2024-11-28 12:50:46.195985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.225 [2024-11-28 12:50:46.196020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.225 qpair failed and we were unable to recover it. 00:27:04.225 [2024-11-28 12:50:46.196221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.225 [2024-11-28 12:50:46.196254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.225 qpair failed and we were unable to recover it. 00:27:04.225 [2024-11-28 12:50:46.196569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.225 [2024-11-28 12:50:46.196603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.225 qpair failed and we were unable to recover it. 00:27:04.225 [2024-11-28 12:50:46.196808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.225 [2024-11-28 12:50:46.196841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.225 qpair failed and we were unable to recover it. 00:27:04.225 [2024-11-28 12:50:46.197038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.225 [2024-11-28 12:50:46.197072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.225 qpair failed and we were unable to recover it. 
00:27:04.225 [2024-11-28 12:50:46.197330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.225 [2024-11-28 12:50:46.197343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.225 qpair failed and we were unable to recover it. 00:27:04.225 [2024-11-28 12:50:46.197544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.225 [2024-11-28 12:50:46.197557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.225 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.197761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.197774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.198025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.198038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.198260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.198274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 
00:27:04.226 [2024-11-28 12:50:46.198443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.198455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.198675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.198687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.198841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.198874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.199151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.199185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.199330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.199364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 
00:27:04.226 [2024-11-28 12:50:46.199573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.199585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.199787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.199800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.199868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.199879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.200169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.200201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.200447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.200481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 
00:27:04.226 [2024-11-28 12:50:46.200746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.200779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.200975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.201009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.201203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.201216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.201407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.201440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.201626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.201659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 
00:27:04.226 [2024-11-28 12:50:46.201970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.202006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.202278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.202311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.202588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.202620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.202910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.202944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.203160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.203192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 
00:27:04.226 [2024-11-28 12:50:46.203471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.203505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.203781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.203813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.204067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.204101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.204391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.204403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.204586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.204598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 
00:27:04.226 [2024-11-28 12:50:46.204760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.204794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.205095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.205130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.205390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.205403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.205646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.205681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.205980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.206015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 
00:27:04.226 [2024-11-28 12:50:46.206288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.206321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.206539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.206571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.206774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.206807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.207064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.207097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 00:27:04.226 [2024-11-28 12:50:46.207349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.226 [2024-11-28 12:50:46.207382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.226 qpair failed and we were unable to recover it. 
00:27:04.226 [2024-11-28 12:50:46.207622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.227 [2024-11-28 12:50:46.207634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.227 qpair failed and we were unable to recover it. 00:27:04.227 [2024-11-28 12:50:46.207811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.227 [2024-11-28 12:50:46.207823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.227 qpair failed and we were unable to recover it. 00:27:04.227 [2024-11-28 12:50:46.208088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.227 [2024-11-28 12:50:46.208123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.227 qpair failed and we were unable to recover it. 00:27:04.227 [2024-11-28 12:50:46.208387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.227 [2024-11-28 12:50:46.208399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.227 qpair failed and we were unable to recover it. 00:27:04.227 [2024-11-28 12:50:46.208616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.227 [2024-11-28 12:50:46.208630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.227 qpair failed and we were unable to recover it. 
00:27:04.227 [2024-11-28 12:50:46.208779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.227 [2024-11-28 12:50:46.208792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.227 qpair failed and we were unable to recover it. 00:27:04.227 [2024-11-28 12:50:46.208957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.227 [2024-11-28 12:50:46.208970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.227 qpair failed and we were unable to recover it. 00:27:04.227 [2024-11-28 12:50:46.209117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.227 [2024-11-28 12:50:46.209151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.227 qpair failed and we were unable to recover it. 00:27:04.227 [2024-11-28 12:50:46.209414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.227 [2024-11-28 12:50:46.209448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.227 qpair failed and we were unable to recover it. 00:27:04.227 [2024-11-28 12:50:46.209580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.227 [2024-11-28 12:50:46.209612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.227 qpair failed and we were unable to recover it. 
00:27:04.227 [2024-11-28 12:50:46.209870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.227 [2024-11-28 12:50:46.209903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.227 qpair failed and we were unable to recover it. 00:27:04.227 [2024-11-28 12:50:46.210183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.227 [2024-11-28 12:50:46.210217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.227 qpair failed and we were unable to recover it. 00:27:04.227 [2024-11-28 12:50:46.210470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.227 [2024-11-28 12:50:46.210503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.227 qpair failed and we were unable to recover it. 00:27:04.227 [2024-11-28 12:50:46.210768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.227 [2024-11-28 12:50:46.210795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.227 qpair failed and we were unable to recover it. 00:27:04.227 [2024-11-28 12:50:46.211096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.227 [2024-11-28 12:50:46.211132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.227 qpair failed and we were unable to recover it. 
00:27:04.227 [2024-11-28 12:50:46.211361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.227 [2024-11-28 12:50:46.211395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.227 qpair failed and we were unable to recover it. 
[editor's note: the same connect()/qpair error triple repeats ~115 times between [2024-11-28 12:50:46.211361] and [2024-11-28 12:50:46.240011], always for tqpair=0x7f8c5c000b90, addr=10.0.0.2, port=4420; the duplicate repetitions have been elided.]
00:27:04.230 [2024-11-28 12:50:46.240304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.230 [2024-11-28 12:50:46.240338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.230 qpair failed and we were unable to recover it. 00:27:04.230 [2024-11-28 12:50:46.240609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.230 [2024-11-28 12:50:46.240643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.230 qpair failed and we were unable to recover it. 00:27:04.230 [2024-11-28 12:50:46.240844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.230 [2024-11-28 12:50:46.240877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.230 qpair failed and we were unable to recover it. 00:27:04.230 [2024-11-28 12:50:46.241097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.230 [2024-11-28 12:50:46.241131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.230 qpair failed and we were unable to recover it. 00:27:04.230 [2024-11-28 12:50:46.241353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.230 [2024-11-28 12:50:46.241385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.230 qpair failed and we were unable to recover it. 
00:27:04.230 [2024-11-28 12:50:46.241658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.230 [2024-11-28 12:50:46.241671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.230 qpair failed and we were unable to recover it. 00:27:04.230 [2024-11-28 12:50:46.241939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.230 [2024-11-28 12:50:46.241982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.230 qpair failed and we were unable to recover it. 00:27:04.230 [2024-11-28 12:50:46.242287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.230 [2024-11-28 12:50:46.242319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.230 qpair failed and we were unable to recover it. 00:27:04.230 [2024-11-28 12:50:46.242576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.230 [2024-11-28 12:50:46.242610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.230 qpair failed and we were unable to recover it. 00:27:04.230 [2024-11-28 12:50:46.242866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.230 [2024-11-28 12:50:46.242899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.230 qpair failed and we were unable to recover it. 
00:27:04.230 [2024-11-28 12:50:46.243208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.230 [2024-11-28 12:50:46.243241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.230 qpair failed and we were unable to recover it. 00:27:04.230 [2024-11-28 12:50:46.243450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.230 [2024-11-28 12:50:46.243483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.230 qpair failed and we were unable to recover it. 00:27:04.230 [2024-11-28 12:50:46.243674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.230 [2024-11-28 12:50:46.243686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.230 qpair failed and we were unable to recover it. 00:27:04.230 [2024-11-28 12:50:46.243839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.230 [2024-11-28 12:50:46.243852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.230 qpair failed and we were unable to recover it. 00:27:04.230 [2024-11-28 12:50:46.243940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.230 [2024-11-28 12:50:46.243957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.230 qpair failed and we were unable to recover it. 
00:27:04.230 [2024-11-28 12:50:46.244167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.230 [2024-11-28 12:50:46.244180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.230 qpair failed and we were unable to recover it. 00:27:04.230 [2024-11-28 12:50:46.244359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.244391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.244612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.244646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.244872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.244905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.245168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.245203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 
00:27:04.231 [2024-11-28 12:50:46.245410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.245442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.245662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.245674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.245908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.245941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.246232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.246265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.246560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.246593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 
00:27:04.231 [2024-11-28 12:50:46.246868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.246901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.247196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.247231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.247491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.247505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.247694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.247706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.247875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.247908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 
00:27:04.231 [2024-11-28 12:50:46.248238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.248271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.248557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.248591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.248873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.248908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.249139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.249175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.249376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.249416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 
00:27:04.231 [2024-11-28 12:50:46.249566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.249600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.249792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.249806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.250038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.250072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.250272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.250306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.250510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.250543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 
00:27:04.231 [2024-11-28 12:50:46.250790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.250802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.250939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.250956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.251123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.251155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.251435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.251468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.251743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.251776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 
00:27:04.231 [2024-11-28 12:50:46.251983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.252017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.252283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.252317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.252497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.252510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.252613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.252665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.252955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.252989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 
00:27:04.231 [2024-11-28 12:50:46.253193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.253232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.253479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.253524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.253779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.253813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.254123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.254159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.231 qpair failed and we were unable to recover it. 00:27:04.231 [2024-11-28 12:50:46.254361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.231 [2024-11-28 12:50:46.254404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.232 qpair failed and we were unable to recover it. 
00:27:04.232 [2024-11-28 12:50:46.254618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.232 [2024-11-28 12:50:46.254630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.232 qpair failed and we were unable to recover it. 00:27:04.232 [2024-11-28 12:50:46.254782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.232 [2024-11-28 12:50:46.254816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.232 qpair failed and we were unable to recover it. 00:27:04.232 [2024-11-28 12:50:46.255053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.232 [2024-11-28 12:50:46.255088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.232 qpair failed and we were unable to recover it. 00:27:04.232 [2024-11-28 12:50:46.255393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.232 [2024-11-28 12:50:46.255427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.232 qpair failed and we were unable to recover it. 00:27:04.232 [2024-11-28 12:50:46.255691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.232 [2024-11-28 12:50:46.255704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.232 qpair failed and we were unable to recover it. 
00:27:04.232 [2024-11-28 12:50:46.255872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.232 [2024-11-28 12:50:46.255885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.232 qpair failed and we were unable to recover it. 00:27:04.232 [2024-11-28 12:50:46.256105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.232 [2024-11-28 12:50:46.256119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.232 qpair failed and we were unable to recover it. 00:27:04.232 [2024-11-28 12:50:46.256311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.232 [2024-11-28 12:50:46.256344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.232 qpair failed and we were unable to recover it. 00:27:04.232 [2024-11-28 12:50:46.256647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.232 [2024-11-28 12:50:46.256681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.232 qpair failed and we were unable to recover it. 00:27:04.232 [2024-11-28 12:50:46.256928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.232 [2024-11-28 12:50:46.256969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.232 qpair failed and we were unable to recover it. 
00:27:04.232 [2024-11-28 12:50:46.257309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.232 [2024-11-28 12:50:46.257343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.232 qpair failed and we were unable to recover it. 00:27:04.232 [2024-11-28 12:50:46.257571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.232 [2024-11-28 12:50:46.257604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.232 qpair failed and we were unable to recover it. 00:27:04.232 [2024-11-28 12:50:46.257859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.232 [2024-11-28 12:50:46.257893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.232 qpair failed and we were unable to recover it. 00:27:04.232 [2024-11-28 12:50:46.258200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.232 [2024-11-28 12:50:46.258234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.232 qpair failed and we were unable to recover it. 00:27:04.232 [2024-11-28 12:50:46.258518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.232 [2024-11-28 12:50:46.258552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.232 qpair failed and we were unable to recover it. 
00:27:04.232 [2024-11-28 12:50:46.258811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.232 [2024-11-28 12:50:46.258845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.232 qpair failed and we were unable to recover it. 00:27:04.232 [2024-11-28 12:50:46.259141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.232 [2024-11-28 12:50:46.259175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.232 qpair failed and we were unable to recover it. 00:27:04.232 [2024-11-28 12:50:46.259368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.232 [2024-11-28 12:50:46.259381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.232 qpair failed and we were unable to recover it. 00:27:04.232 [2024-11-28 12:50:46.259539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.232 [2024-11-28 12:50:46.259563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.232 qpair failed and we were unable to recover it. 00:27:04.232 [2024-11-28 12:50:46.259779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.232 [2024-11-28 12:50:46.259819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.232 qpair failed and we were unable to recover it. 
00:27:04.232 [2024-11-28 12:50:46.260096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.232 [2024-11-28 12:50:46.260129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.232 qpair failed and we were unable to recover it.
[... the same three-record pattern (posix_sock_create connect() failure with errno = 111, followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x7f8c5c000b90 at 10.0.0.2:4420 and "qpair failed and we were unable to recover it.") repeats for every reconnect attempt between 12:50:46.260 and 12:50:46.287 ...]
00:27:04.235 [2024-11-28 12:50:46.287455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.235 [2024-11-28 12:50:46.287489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.235 qpair failed and we were unable to recover it.
00:27:04.235 [2024-11-28 12:50:46.287700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.235 [2024-11-28 12:50:46.287713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.235 qpair failed and we were unable to recover it. 00:27:04.235 [2024-11-28 12:50:46.287921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.235 [2024-11-28 12:50:46.287934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.235 qpair failed and we were unable to recover it. 00:27:04.235 [2024-11-28 12:50:46.288194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.235 [2024-11-28 12:50:46.288207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.235 qpair failed and we were unable to recover it. 00:27:04.235 [2024-11-28 12:50:46.288369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.235 [2024-11-28 12:50:46.288381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.235 qpair failed and we were unable to recover it. 00:27:04.235 [2024-11-28 12:50:46.288581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.235 [2024-11-28 12:50:46.288621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.235 qpair failed and we were unable to recover it. 
00:27:04.235 [2024-11-28 12:50:46.288877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.235 [2024-11-28 12:50:46.288911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.235 qpair failed and we were unable to recover it. 00:27:04.235 [2024-11-28 12:50:46.289119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.235 [2024-11-28 12:50:46.289153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.235 qpair failed and we were unable to recover it. 00:27:04.235 [2024-11-28 12:50:46.289353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.235 [2024-11-28 12:50:46.289386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.235 qpair failed and we were unable to recover it. 00:27:04.235 [2024-11-28 12:50:46.289667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.235 [2024-11-28 12:50:46.289701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.235 qpair failed and we were unable to recover it. 00:27:04.235 [2024-11-28 12:50:46.289879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.235 [2024-11-28 12:50:46.289892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.235 qpair failed and we were unable to recover it. 
00:27:04.235 [2024-11-28 12:50:46.290113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.235 [2024-11-28 12:50:46.290149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.235 qpair failed and we were unable to recover it. 00:27:04.235 [2024-11-28 12:50:46.290355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.235 [2024-11-28 12:50:46.290389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.235 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.290656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.290689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.290822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.290855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.291043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.291076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 
00:27:04.236 [2024-11-28 12:50:46.291362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.291397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.291584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.291617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.291870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.291904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.292122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.292158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.292408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.292421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 
00:27:04.236 [2024-11-28 12:50:46.292659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.292692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.292904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.292937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.293206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.293240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.293496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.293530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.293755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.293795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 
00:27:04.236 [2024-11-28 12:50:46.294007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.294020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.294261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.294274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.294437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.294450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.294552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.294563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.294835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.294869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 
00:27:04.236 [2024-11-28 12:50:46.295076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.295110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.295472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.295560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.295867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.295904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.296227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.296264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.296553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.296587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 
00:27:04.236 [2024-11-28 12:50:46.296795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.296829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.297065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.297101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.297301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.297335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.297541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.297575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.297796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.297829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 
00:27:04.236 [2024-11-28 12:50:46.298117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.298152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.298352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.298385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.298690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.298724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.298962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.298996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.299209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.299253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 
00:27:04.236 [2024-11-28 12:50:46.299516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.299533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.299750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.299766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.299871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.299887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.300157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.300192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 00:27:04.236 [2024-11-28 12:50:46.300496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.236 [2024-11-28 12:50:46.300529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.236 qpair failed and we were unable to recover it. 
00:27:04.236 [2024-11-28 12:50:46.300788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.300805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.300912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.300929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.301182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.301200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.301450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.301483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.301699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.301733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 
00:27:04.237 [2024-11-28 12:50:46.301882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.301917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.302246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.302323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.302630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.302668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.302903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.302938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.303142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.303177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 
00:27:04.237 [2024-11-28 12:50:46.303376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.303408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.303672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.303689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.303959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.303977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.304126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.304142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.304325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.304358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 
00:27:04.237 [2024-11-28 12:50:46.304498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.304531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.304735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.304768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.304964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.304981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.305137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.305171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.305404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.305436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 
00:27:04.237 [2024-11-28 12:50:46.305659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.305692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.305898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.305938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.306258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.306291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.306560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.306594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.306892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.306908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 
00:27:04.237 [2024-11-28 12:50:46.307179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.307197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.307361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.307378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.307565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.307582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.307750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.307768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.308046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.308080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 
00:27:04.237 [2024-11-28 12:50:46.308277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.308311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.308569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.308586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.308894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.308928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.309151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.237 [2024-11-28 12:50:46.309183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.237 qpair failed and we were unable to recover it. 00:27:04.237 [2024-11-28 12:50:46.309398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.238 [2024-11-28 12:50:46.309431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.238 qpair failed and we were unable to recover it. 
00:27:04.238 [2024-11-28 12:50:46.309696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.309770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.309992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.310032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.310327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.310362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.310645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.310678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.310967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.311002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.311277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.311309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.311508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.311526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.311690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.311723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.311942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.311986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.312171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.312206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.312393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.312426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.312649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.312683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.312825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.312858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.313112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.313156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.313442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.313475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.313750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.313793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.314057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.314074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.314188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.314205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.314425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.314458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.314734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.314768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.315044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.315078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.315392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.315426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.315674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.315691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.315935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.315956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.316175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.316192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.316436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.316452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.316537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.316551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.316721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.316761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.317036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.317070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.317287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.317321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.317521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.317554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.317694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.317711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.317877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.317910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.318117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.318151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.318331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.318363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.318557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.318573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.238 [2024-11-28 12:50:46.318685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.238 [2024-11-28 12:50:46.318717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.238 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.318930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.318973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.319272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.319306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.319457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.319489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.319711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.319757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.319907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.319923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.320108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.320144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.320399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.320433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.320654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.320695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.320854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.320871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.321086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.321102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.321269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.321302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.321501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.321534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.321855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.321889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.322042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.322076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.322349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.322383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.322634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.322668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.322869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.322902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.323194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.323229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.323508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.323541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.323825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.323859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.324090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.324124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.324407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.324442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.324688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.324720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.325029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.325064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.325320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.325366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.325599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.325616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.325730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.325746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.325961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.325996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.326274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.326306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.326446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.326464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.326694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.326733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.326933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.326988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.327195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.327229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.327452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.327484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.327730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.327747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.328028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.328063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.328279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.328312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.328572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.239 [2024-11-28 12:50:46.328605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.239 qpair failed and we were unable to recover it.
00:27:04.239 [2024-11-28 12:50:46.328896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.240 [2024-11-28 12:50:46.328913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.240 qpair failed and we were unable to recover it.
00:27:04.240 [2024-11-28 12:50:46.329073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.240 [2024-11-28 12:50:46.329090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.240 qpair failed and we were unable to recover it.
00:27:04.240 [2024-11-28 12:50:46.329262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.240 [2024-11-28 12:50:46.329279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.240 qpair failed and we were unable to recover it.
00:27:04.240 [2024-11-28 12:50:46.329452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.240 [2024-11-28 12:50:46.329485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.240 qpair failed and we were unable to recover it.
00:27:04.240 [2024-11-28 12:50:46.329736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.240 [2024-11-28 12:50:46.329768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.240 qpair failed and we were unable to recover it.
00:27:04.240 [2024-11-28 12:50:46.330063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.240 [2024-11-28 12:50:46.330097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.240 qpair failed and we were unable to recover it.
00:27:04.240 [2024-11-28 12:50:46.330389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.240 [2024-11-28 12:50:46.330422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.240 qpair failed and we were unable to recover it.
00:27:04.240 [2024-11-28 12:50:46.330692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.240 [2024-11-28 12:50:46.330725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.240 qpair failed and we were unable to recover it.
00:27:04.240 [2024-11-28 12:50:46.330865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.240 [2024-11-28 12:50:46.330882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.240 qpair failed and we were unable to recover it.
00:27:04.240 [2024-11-28 12:50:46.331124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.240 [2024-11-28 12:50:46.331141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.240 qpair failed and we were unable to recover it.
00:27:04.240 [2024-11-28 12:50:46.331404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.240 [2024-11-28 12:50:46.331438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.240 qpair failed and we were unable to recover it.
00:27:04.240 [2024-11-28 12:50:46.331708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.240 [2024-11-28 12:50:46.331741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.240 qpair failed and we were unable to recover it.
00:27:04.240 [2024-11-28 12:50:46.331994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.240 [2024-11-28 12:50:46.332012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.240 qpair failed and we were unable to recover it.
00:27:04.240 [2024-11-28 12:50:46.332186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.240 [2024-11-28 12:50:46.332220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.240 qpair failed and we were unable to recover it.
00:27:04.240 [2024-11-28 12:50:46.332474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.240 [2024-11-28 12:50:46.332507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.240 qpair failed and we were unable to recover it.
00:27:04.240 [2024-11-28 12:50:46.332783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.240 [2024-11-28 12:50:46.332816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.240 qpair failed and we were unable to recover it.
00:27:04.240 [2024-11-28 12:50:46.333090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.240 [2024-11-28 12:50:46.333124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.240 qpair failed and we were unable to recover it.
00:27:04.240 [2024-11-28 12:50:46.333388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.240 [2024-11-28 12:50:46.333422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.240 qpair failed and we were unable to recover it.
00:27:04.240 [2024-11-28 12:50:46.333659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.240 [2024-11-28 12:50:46.333677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.240 qpair failed and we were unable to recover it.
00:27:04.240 [2024-11-28 12:50:46.333837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.240 [2024-11-28 12:50:46.333857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.240 qpair failed and we were unable to recover it.
00:27:04.240 [2024-11-28 12:50:46.334020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.240 [2024-11-28 12:50:46.334055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.240 qpair failed and we were unable to recover it. 00:27:04.240 [2024-11-28 12:50:46.334200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.240 [2024-11-28 12:50:46.334234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.240 qpair failed and we were unable to recover it. 00:27:04.240 [2024-11-28 12:50:46.334500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.240 [2024-11-28 12:50:46.334534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.240 qpair failed and we were unable to recover it. 00:27:04.240 [2024-11-28 12:50:46.334799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.240 [2024-11-28 12:50:46.334832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.240 qpair failed and we were unable to recover it. 00:27:04.240 [2024-11-28 12:50:46.335128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.240 [2024-11-28 12:50:46.335162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.240 qpair failed and we were unable to recover it. 
00:27:04.240 [2024-11-28 12:50:46.335435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.240 [2024-11-28 12:50:46.335479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.240 qpair failed and we were unable to recover it. 00:27:04.240 [2024-11-28 12:50:46.335651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.240 [2024-11-28 12:50:46.335668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.240 qpair failed and we were unable to recover it. 00:27:04.240 [2024-11-28 12:50:46.335834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.240 [2024-11-28 12:50:46.335868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.240 qpair failed and we were unable to recover it. 00:27:04.240 [2024-11-28 12:50:46.336100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.240 [2024-11-28 12:50:46.336135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.240 qpair failed and we were unable to recover it. 00:27:04.240 [2024-11-28 12:50:46.336337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.240 [2024-11-28 12:50:46.336371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.240 qpair failed and we were unable to recover it. 
00:27:04.240 [2024-11-28 12:50:46.336620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.240 [2024-11-28 12:50:46.336637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.240 qpair failed and we were unable to recover it. 00:27:04.240 [2024-11-28 12:50:46.336880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.240 [2024-11-28 12:50:46.336896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.240 qpair failed and we were unable to recover it. 00:27:04.240 [2024-11-28 12:50:46.337068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.240 [2024-11-28 12:50:46.337085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.240 qpair failed and we were unable to recover it. 00:27:04.240 [2024-11-28 12:50:46.337302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.240 [2024-11-28 12:50:46.337318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.240 qpair failed and we were unable to recover it. 00:27:04.240 [2024-11-28 12:50:46.337505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.240 [2024-11-28 12:50:46.337522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.240 qpair failed and we were unable to recover it. 
00:27:04.240 [2024-11-28 12:50:46.337749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.240 [2024-11-28 12:50:46.337783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.240 qpair failed and we were unable to recover it. 00:27:04.240 [2024-11-28 12:50:46.338065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.240 [2024-11-28 12:50:46.338100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.240 qpair failed and we were unable to recover it. 00:27:04.240 [2024-11-28 12:50:46.338401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.240 [2024-11-28 12:50:46.338436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.240 qpair failed and we were unable to recover it. 00:27:04.240 [2024-11-28 12:50:46.338633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.240 [2024-11-28 12:50:46.338668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.240 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.338888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.338921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 
00:27:04.241 [2024-11-28 12:50:46.339145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.339178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.339432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.339465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.339717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.339734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.339905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.339922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.340145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.340162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 
00:27:04.241 [2024-11-28 12:50:46.340456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.340490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.340746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.340779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.340960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.340978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.341197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.341230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.341504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.341539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 
00:27:04.241 [2024-11-28 12:50:46.341795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.341828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.342102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.342136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.342381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.342398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.342622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.342640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.342854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.342871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 
00:27:04.241 [2024-11-28 12:50:46.343062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.343080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.343317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.343334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.343587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.343621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.343890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.343906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.344085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.344102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 
00:27:04.241 [2024-11-28 12:50:46.344346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.344385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.344562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.344579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.344856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.344890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.345201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.345235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.345487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.345521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 
00:27:04.241 [2024-11-28 12:50:46.345823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.345855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.346124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.346160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.346441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.346474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.346694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.346728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.346877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.346919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 
00:27:04.241 [2024-11-28 12:50:46.347214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.347248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.347506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.347540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.347836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.347869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.348056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.348091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.348300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.348334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 
00:27:04.241 [2024-11-28 12:50:46.348617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.348635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.348794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.241 [2024-11-28 12:50:46.348811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.241 qpair failed and we were unable to recover it. 00:27:04.241 [2024-11-28 12:50:46.349053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.349071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.349258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.349291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.349571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.349605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 
00:27:04.242 [2024-11-28 12:50:46.349793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.349834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.350054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.350071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.350239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.350257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.350429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.350461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.350741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.350774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 
00:27:04.242 [2024-11-28 12:50:46.350969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.351003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.351257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.351290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.351596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.351631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.351923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.351978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.352178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.352213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 
00:27:04.242 [2024-11-28 12:50:46.352514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.352547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.352781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.352800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.352961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.352979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.353202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.353235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.353433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.353468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 
00:27:04.242 [2024-11-28 12:50:46.353687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.353720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.353996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.354032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.354336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.354369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.354580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.354614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.354767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.354800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 
00:27:04.242 [2024-11-28 12:50:46.355072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.355106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.355412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.355487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.355716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.355753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.356012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.356049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.356329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.356362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 
00:27:04.242 [2024-11-28 12:50:46.356671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.356705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.356892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.356927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.357087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.357122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.357314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.357348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 00:27:04.242 [2024-11-28 12:50:46.357490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.242 [2024-11-28 12:50:46.357523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.242 qpair failed and we were unable to recover it. 
00:27:04.243 [2024-11-28 12:50:46.362042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.243 [2024-11-28 12:50:46.362075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.243 qpair failed and we were unable to recover it.
00:27:04.243 [2024-11-28 12:50:46.364301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.243 [2024-11-28 12:50:46.364347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.243 qpair failed and we were unable to recover it.
00:27:04.245 [2024-11-28 12:50:46.379998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.245 [2024-11-28 12:50:46.380016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.245 qpair failed and we were unable to recover it. 00:27:04.245 [2024-11-28 12:50:46.380126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.245 [2024-11-28 12:50:46.380141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.245 qpair failed and we were unable to recover it. 00:27:04.245 [2024-11-28 12:50:46.380250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.245 [2024-11-28 12:50:46.380268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.245 qpair failed and we were unable to recover it. 00:27:04.245 [2024-11-28 12:50:46.380371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.245 [2024-11-28 12:50:46.380387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.245 qpair failed and we were unable to recover it. 00:27:04.245 [2024-11-28 12:50:46.380617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.245 [2024-11-28 12:50:46.380633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.245 qpair failed and we were unable to recover it. 
00:27:04.245 [2024-11-28 12:50:46.380738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.245 [2024-11-28 12:50:46.380753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.245 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.380916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.380933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.381206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.381223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.381335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.381352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.381523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.381539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 
00:27:04.246 [2024-11-28 12:50:46.381693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.381711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.381897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.381913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.382080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.382098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.382244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.382261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.382359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.382374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 
00:27:04.246 [2024-11-28 12:50:46.382615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.382631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.382719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.382738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.382887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.382903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.383000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.383016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.383183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.383200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 
00:27:04.246 [2024-11-28 12:50:46.383366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.383382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.383532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.383548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.383717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.383734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.383815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.383831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.384001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.384018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 
00:27:04.246 [2024-11-28 12:50:46.384099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.384114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.384282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.384297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.384394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.384410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.384558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.384574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.384678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.384694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 
00:27:04.246 [2024-11-28 12:50:46.384914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.384931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.385121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.385137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.385363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.385380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.385544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.385562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.385755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.385772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 
00:27:04.246 [2024-11-28 12:50:46.385882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.385898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.385988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.386004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.386256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.386274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.386373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.386389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.386502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.386520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 
00:27:04.246 [2024-11-28 12:50:46.386676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.386693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.246 [2024-11-28 12:50:46.386793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.246 [2024-11-28 12:50:46.386809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.246 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.386894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.386910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.387087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.387103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.387268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.387285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 
00:27:04.247 [2024-11-28 12:50:46.387439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.387456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.387539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.387555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.387737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.387755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.387934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.387970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.388081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.388098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 
00:27:04.247 [2024-11-28 12:50:46.388333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.388349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.388444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.388461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.388677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.388693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.388912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.388929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.389035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.389053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 
00:27:04.247 [2024-11-28 12:50:46.389142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.389158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.389252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.389268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.389366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.389385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.389481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.389499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.389719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.389736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 
00:27:04.247 [2024-11-28 12:50:46.389893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.389909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.390071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.390088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.390166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.390181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.390337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.390354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.390467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.390483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 
00:27:04.247 [2024-11-28 12:50:46.390629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.390645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.390801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.390818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.390924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.390939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.391183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.391199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.391295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.391312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 
00:27:04.247 [2024-11-28 12:50:46.391458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.391475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.391604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.391620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.391779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.391796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.391888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.391902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.392062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.392078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 
00:27:04.247 [2024-11-28 12:50:46.392334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.392351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.392452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.392468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.247 [2024-11-28 12:50:46.392569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.247 [2024-11-28 12:50:46.392586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.247 qpair failed and we were unable to recover it. 00:27:04.248 [2024-11-28 12:50:46.392752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.248 [2024-11-28 12:50:46.392768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.248 qpair failed and we were unable to recover it. 00:27:04.248 [2024-11-28 12:50:46.392860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.248 [2024-11-28 12:50:46.392877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.248 qpair failed and we were unable to recover it. 
00:27:04.248 [2024-11-28 12:50:46.393120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.248 [2024-11-28 12:50:46.393137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.248 qpair failed and we were unable to recover it. 00:27:04.248 [2024-11-28 12:50:46.393313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.248 [2024-11-28 12:50:46.393329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.248 qpair failed and we were unable to recover it. 00:27:04.248 [2024-11-28 12:50:46.393499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.248 [2024-11-28 12:50:46.393515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.248 qpair failed and we were unable to recover it. 00:27:04.248 [2024-11-28 12:50:46.393678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.248 [2024-11-28 12:50:46.393695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.248 qpair failed and we were unable to recover it. 00:27:04.248 [2024-11-28 12:50:46.393882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.248 [2024-11-28 12:50:46.393902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.248 qpair failed and we were unable to recover it. 
00:27:04.248 [2024-11-28 12:50:46.394007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.394023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.394171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.394188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.394397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.394413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.394581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.394597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.394763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.394779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.394926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.394943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.395046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.395061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.395212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.395228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.395402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.395418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.395645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.395661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.395817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.395833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.396021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.396037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.396208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.396224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.396396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10b20 is same with the state(6) to be set
00:27:04.248 [2024-11-28 12:50:46.396652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.396683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.396857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.396875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.396964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.396980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.397155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.397172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.397333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.397354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.397522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.397546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.397725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.397749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.397939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.397969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.398090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.398113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.398282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.398302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.398453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.398470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.398568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.398585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.398687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.398703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.398901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.398917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.399082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.399099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.399190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.399206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.399377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.399402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.248 [2024-11-28 12:50:46.399586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.248 [2024-11-28 12:50:46.399609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.248 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.399797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.399821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.399985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.400010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.400197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.400216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.400438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.400454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.400544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.400561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.400660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.400676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.400832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.400848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.401066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.401083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.401164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.401190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.401303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.401326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.401531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.401555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.401681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.401705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.401810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.401829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.401978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.401995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.402227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.402243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.402317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.402333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.402570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.402587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.402699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.402714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.402856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.402872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.403134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.403150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.403248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.403263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.403367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.403381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.403621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.403637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.403715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.403730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.403826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.403841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.403943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.403964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.404178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.404194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.404410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.404426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.404517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.404533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.404631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.404648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.404734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.404749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.404836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.404850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.404958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.404974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.405191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.405208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.405322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.405338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.405495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.405515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.405624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.249 [2024-11-28 12:50:46.405640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.249 qpair failed and we were unable to recover it.
00:27:04.249 [2024-11-28 12:50:46.405790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.405805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.405958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.405975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.406125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.406142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.406231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.406247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.406327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.406343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.406434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.406450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.406719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.406735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.406826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.406843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.407076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.407092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.407174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.407188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.407276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.407292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.407377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.407394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.407484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.407500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.407675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.407691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.407835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.407851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.408019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.408036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.408192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.408209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.408298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.408314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.408416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.408432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.408594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.408610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.408766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.408783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.408917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.408933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.409095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.409111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.409306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.409322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.409468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.409484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.409745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.409761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.409872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.250 [2024-11-28 12:50:46.409886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.250 qpair failed and we were unable to recover it.
00:27:04.250 [2024-11-28 12:50:46.410030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.250 [2024-11-28 12:50:46.410047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.250 qpair failed and we were unable to recover it. 00:27:04.250 [2024-11-28 12:50:46.410128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.250 [2024-11-28 12:50:46.410143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.250 qpair failed and we were unable to recover it. 00:27:04.250 [2024-11-28 12:50:46.410338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.250 [2024-11-28 12:50:46.410354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.250 qpair failed and we were unable to recover it. 00:27:04.250 [2024-11-28 12:50:46.410451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.250 [2024-11-28 12:50:46.410468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.250 qpair failed and we were unable to recover it. 00:27:04.250 [2024-11-28 12:50:46.410657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.250 [2024-11-28 12:50:46.410673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.250 qpair failed and we were unable to recover it. 
00:27:04.250 [2024-11-28 12:50:46.410832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.250 [2024-11-28 12:50:46.410849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.250 qpair failed and we were unable to recover it. 00:27:04.250 [2024-11-28 12:50:46.411022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.250 [2024-11-28 12:50:46.411039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.250 qpair failed and we were unable to recover it. 00:27:04.250 [2024-11-28 12:50:46.411275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.250 [2024-11-28 12:50:46.411291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.250 qpair failed and we were unable to recover it. 00:27:04.250 [2024-11-28 12:50:46.411397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.250 [2024-11-28 12:50:46.411413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.250 qpair failed and we were unable to recover it. 00:27:04.250 [2024-11-28 12:50:46.411519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.250 [2024-11-28 12:50:46.411536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.250 qpair failed and we were unable to recover it. 
00:27:04.250 [2024-11-28 12:50:46.411683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.250 [2024-11-28 12:50:46.411699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.250 qpair failed and we were unable to recover it. 00:27:04.250 [2024-11-28 12:50:46.411931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.250 [2024-11-28 12:50:46.411953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.250 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.412046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.412066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.412229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.412246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.412459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.412475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 
00:27:04.251 [2024-11-28 12:50:46.412639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.412655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.412745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.412761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.412849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.412866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.412957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.412973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.413065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.413083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 
00:27:04.251 [2024-11-28 12:50:46.413246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.413261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.413365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.413381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.413612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.413628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.413708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.413723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.413816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.413832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 
00:27:04.251 [2024-11-28 12:50:46.413979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.413995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.414155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.414172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.414318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.414335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.414483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.414499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.414580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.414594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 
00:27:04.251 [2024-11-28 12:50:46.414754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.414768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.414914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.414928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.415095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.415110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.415265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.415280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.415419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.415434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 
00:27:04.251 [2024-11-28 12:50:46.415574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.415588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.415770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.415786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.415874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.415889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.416083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.416098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.416354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.416372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 
00:27:04.251 [2024-11-28 12:50:46.416461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.416476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.416633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.416648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.416814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.416828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.417013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.417029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.417201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.417217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 
00:27:04.251 [2024-11-28 12:50:46.417447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.417463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.251 [2024-11-28 12:50:46.417706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.251 [2024-11-28 12:50:46.417721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.251 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.417975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.417992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.418227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.418243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.418436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.418454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 
00:27:04.252 [2024-11-28 12:50:46.418609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.418624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.418855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.418870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.419039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.419053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.419149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.419164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.419396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.419411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 
00:27:04.252 [2024-11-28 12:50:46.419502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.419516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.419672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.419686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.419773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.419786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.419933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.419954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.420213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.420228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 
00:27:04.252 [2024-11-28 12:50:46.420439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.420454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.420730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.420746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.420988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.421005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.421178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.421192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.421345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.421360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 
00:27:04.252 [2024-11-28 12:50:46.421609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.421625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.421855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.421871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.422055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.422072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.422283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.422299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.422448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.422464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 
00:27:04.252 [2024-11-28 12:50:46.422627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.422642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.422797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.422813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.422909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.422923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.423021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.423036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.423255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.423270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 
00:27:04.252 [2024-11-28 12:50:46.423479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.423494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.423652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.423668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.423810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.423827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.423983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.424000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.424172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.424188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 
00:27:04.252 [2024-11-28 12:50:46.424421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.424439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.424526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.424540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.424633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.424647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.424880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.424896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 00:27:04.252 [2024-11-28 12:50:46.425068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.252 [2024-11-28 12:50:46.425083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.252 qpair failed and we were unable to recover it. 
00:27:04.252 [2024-11-28 12:50:46.425190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.252 [2024-11-28 12:50:46.425205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.252 qpair failed and we were unable to recover it.
00:27:04.253 [2024-11-28 12:50:46.432265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.253 [2024-11-28 12:50:46.432289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.253 qpair failed and we were unable to recover it.
00:27:04.255 [2024-11-28 12:50:46.447818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.255 [2024-11-28 12:50:46.447831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.255 qpair failed and we were unable to recover it. 00:27:04.255 [2024-11-28 12:50:46.447987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.255 [2024-11-28 12:50:46.448002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.255 qpair failed and we were unable to recover it. 00:27:04.255 [2024-11-28 12:50:46.448214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.255 [2024-11-28 12:50:46.448227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.255 qpair failed and we were unable to recover it. 00:27:04.255 [2024-11-28 12:50:46.448503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.255 [2024-11-28 12:50:46.448517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.255 qpair failed and we were unable to recover it. 00:27:04.255 [2024-11-28 12:50:46.448766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.255 [2024-11-28 12:50:46.448780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.255 qpair failed and we were unable to recover it. 
00:27:04.255 [2024-11-28 12:50:46.449033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.255 [2024-11-28 12:50:46.449046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.255 qpair failed and we were unable to recover it. 00:27:04.255 [2024-11-28 12:50:46.449248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.255 [2024-11-28 12:50:46.449262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.255 qpair failed and we were unable to recover it. 00:27:04.255 [2024-11-28 12:50:46.449348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.255 [2024-11-28 12:50:46.449361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.255 qpair failed and we were unable to recover it. 00:27:04.255 [2024-11-28 12:50:46.449513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.255 [2024-11-28 12:50:46.449525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.449688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.449701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 
00:27:04.256 [2024-11-28 12:50:46.449924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.449939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.450082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.450095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.450246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.450259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.450399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.450412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.450612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.450628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 
00:27:04.256 [2024-11-28 12:50:46.450875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.450889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.451106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.451119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.451278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.451290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.451539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.451556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.451701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.451713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 
00:27:04.256 [2024-11-28 12:50:46.451816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.451828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.452033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.452050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.452252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.452266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.452336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.452347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.452514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.452527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 
00:27:04.256 [2024-11-28 12:50:46.452698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.452711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.452923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.452940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.453028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.453039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.453217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.453230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.453431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.453446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 
00:27:04.256 [2024-11-28 12:50:46.453609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.453621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.453798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.453811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.453961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.453974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.454072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.454084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.454221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.454233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 
00:27:04.256 [2024-11-28 12:50:46.454320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.454331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.454539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.454553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.454718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.454731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.454958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.454971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.455158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.455172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 
00:27:04.256 [2024-11-28 12:50:46.455349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.455361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.455595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.455616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.455726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.455742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.455885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.455901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.456002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.456017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 
00:27:04.256 [2024-11-28 12:50:46.456167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.456182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.456392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.456408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.456565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.456582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.456814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.456829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.456922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.456938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 
00:27:04.256 [2024-11-28 12:50:46.457129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.457146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.256 [2024-11-28 12:50:46.457375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.256 [2024-11-28 12:50:46.457390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.256 qpair failed and we were unable to recover it. 00:27:04.257 [2024-11-28 12:50:46.457487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.457502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 00:27:04.257 [2024-11-28 12:50:46.457687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.457703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 00:27:04.257 [2024-11-28 12:50:46.457935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.457955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 
00:27:04.257 [2024-11-28 12:50:46.458059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.458074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 00:27:04.257 [2024-11-28 12:50:46.458245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.458262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 00:27:04.257 [2024-11-28 12:50:46.458470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.458485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 00:27:04.257 [2024-11-28 12:50:46.458578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.458592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 00:27:04.257 [2024-11-28 12:50:46.458733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.458749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 
00:27:04.257 [2024-11-28 12:50:46.459004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.459021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 00:27:04.257 [2024-11-28 12:50:46.459110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.459125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 00:27:04.257 [2024-11-28 12:50:46.459239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.459255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 00:27:04.257 [2024-11-28 12:50:46.459343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.459359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 00:27:04.257 [2024-11-28 12:50:46.459592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.459608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 
00:27:04.257 [2024-11-28 12:50:46.459852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.459868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 00:27:04.257 [2024-11-28 12:50:46.460030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.460045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 00:27:04.257 [2024-11-28 12:50:46.460240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.460256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 00:27:04.257 [2024-11-28 12:50:46.460483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.460505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 00:27:04.257 [2024-11-28 12:50:46.460675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.460691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 
00:27:04.257 [2024-11-28 12:50:46.460859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.460876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 00:27:04.257 [2024-11-28 12:50:46.461029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.461048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 00:27:04.257 [2024-11-28 12:50:46.461207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.461222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 00:27:04.257 [2024-11-28 12:50:46.461383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.461399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 00:27:04.257 [2024-11-28 12:50:46.461541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.461557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 
00:27:04.257 [2024-11-28 12:50:46.461827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.257 [2024-11-28 12:50:46.461844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.257 qpair failed and we were unable to recover it. 
[... the preceding connect()/qpair error pair repeated for roughly 114 further attempts between 12:50:46.462 and 12:50:46.484, all with errno = 111 against addr=10.0.0.2, port=4420, alternating between tqpair=0xd02be0 and tqpair=0x7f8c5c000b90; every attempt ended with "qpair failed and we were unable to recover it." ...]
00:27:04.260 [2024-11-28 12:50:46.484078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.484114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 00:27:04.260 [2024-11-28 12:50:46.484395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.484414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 00:27:04.260 [2024-11-28 12:50:46.484625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.484637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 00:27:04.260 [2024-11-28 12:50:46.484909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.484923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 00:27:04.260 [2024-11-28 12:50:46.485092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.485106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 
00:27:04.260 [2024-11-28 12:50:46.485258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.485270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 00:27:04.260 [2024-11-28 12:50:46.485415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.485430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 00:27:04.260 [2024-11-28 12:50:46.485639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.485653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 00:27:04.260 [2024-11-28 12:50:46.485788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.485800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 00:27:04.260 [2024-11-28 12:50:46.485945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.485966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 
00:27:04.260 [2024-11-28 12:50:46.486117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.486130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 00:27:04.260 [2024-11-28 12:50:46.486378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.486391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 00:27:04.260 [2024-11-28 12:50:46.486549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.486561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 00:27:04.260 [2024-11-28 12:50:46.486696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.486714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 00:27:04.260 [2024-11-28 12:50:46.486864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.486876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 
00:27:04.260 [2024-11-28 12:50:46.487044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.487059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 00:27:04.260 [2024-11-28 12:50:46.487201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.487213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 00:27:04.260 [2024-11-28 12:50:46.487457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.487469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 00:27:04.260 [2024-11-28 12:50:46.487676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.487689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 00:27:04.260 [2024-11-28 12:50:46.487841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.487853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 
00:27:04.260 [2024-11-28 12:50:46.488002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.488015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 00:27:04.260 [2024-11-28 12:50:46.488237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.488252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 00:27:04.260 [2024-11-28 12:50:46.488459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.488472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 00:27:04.260 [2024-11-28 12:50:46.488728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.488741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 00:27:04.260 [2024-11-28 12:50:46.488955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.260 [2024-11-28 12:50:46.488969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.260 qpair failed and we were unable to recover it. 
00:27:04.260 [2024-11-28 12:50:46.489103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.489115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.489354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.489367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.489593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.489606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.489829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.489842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.490068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.490081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 
00:27:04.261 [2024-11-28 12:50:46.490174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.490186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.490331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.490343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.490590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.490603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.490771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.490787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.490967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.490980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 
00:27:04.261 [2024-11-28 12:50:46.491116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.491131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.491286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.491300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.491477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.491489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.491716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.491730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.491884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.491897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 
00:27:04.261 [2024-11-28 12:50:46.492165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.492185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.492351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.492367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.492642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.492658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.492837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.492852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.493015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.493032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 
00:27:04.261 [2024-11-28 12:50:46.493211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.493226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.493405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.493421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.493642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.493658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.493867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.493883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.494045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.494061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 
00:27:04.261 [2024-11-28 12:50:46.494213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.494229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.494372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.494389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.494599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.494615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.494871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.494895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.495138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.495155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 
00:27:04.261 [2024-11-28 12:50:46.495365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.495381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.495538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.495555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.495782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.495798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.496026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.496043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.496303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.496319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 
00:27:04.261 [2024-11-28 12:50:46.496568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.496584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.496739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.496754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.496995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.261 [2024-11-28 12:50:46.497012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.261 qpair failed and we were unable to recover it. 00:27:04.261 [2024-11-28 12:50:46.497116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.262 [2024-11-28 12:50:46.497132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.262 qpair failed and we were unable to recover it. 00:27:04.262 [2024-11-28 12:50:46.497368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.262 [2024-11-28 12:50:46.497385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.262 qpair failed and we were unable to recover it. 
00:27:04.262 [2024-11-28 12:50:46.497597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.262 [2024-11-28 12:50:46.497613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.262 qpair failed and we were unable to recover it. 00:27:04.262 [2024-11-28 12:50:46.497775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.262 [2024-11-28 12:50:46.497793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.262 qpair failed and we were unable to recover it. 00:27:04.262 [2024-11-28 12:50:46.497881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.262 [2024-11-28 12:50:46.497897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.262 qpair failed and we were unable to recover it. 00:27:04.262 [2024-11-28 12:50:46.497998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.262 [2024-11-28 12:50:46.498013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.262 qpair failed and we were unable to recover it. 00:27:04.262 [2024-11-28 12:50:46.498248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.262 [2024-11-28 12:50:46.498266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.262 qpair failed and we were unable to recover it. 
00:27:04.262 [2024-11-28 12:50:46.498489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.262 [2024-11-28 12:50:46.498505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.262 qpair failed and we were unable to recover it. 00:27:04.262 [2024-11-28 12:50:46.498670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.262 [2024-11-28 12:50:46.498686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.262 qpair failed and we were unable to recover it. 00:27:04.262 [2024-11-28 12:50:46.498929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.262 [2024-11-28 12:50:46.498946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.262 qpair failed and we were unable to recover it. 00:27:04.262 [2024-11-28 12:50:46.499184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.262 [2024-11-28 12:50:46.499201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.262 qpair failed and we were unable to recover it. 00:27:04.262 [2024-11-28 12:50:46.499463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.262 [2024-11-28 12:50:46.499480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.262 qpair failed and we were unable to recover it. 
00:27:04.262 [2024-11-28 12:50:46.499702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.262 [2024-11-28 12:50:46.499719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.262 qpair failed and we were unable to recover it. 
[the preceding connect()/qpair error pair repeats with varying microsecond timestamps from 12:50:46.499702 through 12:50:46.525312, for tqpair values 0x7f8c64000b90, 0x7f8c58000b90, 0x7f8c5c000b90, and 0xd02be0, all against addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it."]
00:27:04.265 [2024-11-28 12:50:46.525525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.265 [2024-11-28 12:50:46.525557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.265 qpair failed and we were unable to recover it. 00:27:04.265 [2024-11-28 12:50:46.525802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.265 [2024-11-28 12:50:46.525834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.265 qpair failed and we were unable to recover it. 00:27:04.265 [2024-11-28 12:50:46.526088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.265 [2024-11-28 12:50:46.526121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.265 qpair failed and we were unable to recover it. 00:27:04.265 [2024-11-28 12:50:46.526305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.265 [2024-11-28 12:50:46.526337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.265 qpair failed and we were unable to recover it. 00:27:04.265 [2024-11-28 12:50:46.526549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.265 [2024-11-28 12:50:46.526582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.265 qpair failed and we were unable to recover it. 
00:27:04.265 [2024-11-28 12:50:46.526698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.265 [2024-11-28 12:50:46.526730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.265 qpair failed and we were unable to recover it. 00:27:04.265 [2024-11-28 12:50:46.526997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.265 [2024-11-28 12:50:46.527013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.265 qpair failed and we were unable to recover it. 00:27:04.265 [2024-11-28 12:50:46.527254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.265 [2024-11-28 12:50:46.527286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.265 qpair failed and we were unable to recover it. 00:27:04.265 [2024-11-28 12:50:46.527552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.265 [2024-11-28 12:50:46.527585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.265 qpair failed and we were unable to recover it. 00:27:04.265 [2024-11-28 12:50:46.527881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.265 [2024-11-28 12:50:46.527913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.265 qpair failed and we were unable to recover it. 
00:27:04.265 [2024-11-28 12:50:46.528180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.265 [2024-11-28 12:50:46.528213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.265 qpair failed and we were unable to recover it. 00:27:04.265 [2024-11-28 12:50:46.528456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.265 [2024-11-28 12:50:46.528488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.265 qpair failed and we were unable to recover it. 00:27:04.265 [2024-11-28 12:50:46.528762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.265 [2024-11-28 12:50:46.528795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.265 qpair failed and we were unable to recover it. 00:27:04.265 [2024-11-28 12:50:46.528977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.265 [2024-11-28 12:50:46.529011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.265 qpair failed and we were unable to recover it. 00:27:04.265 [2024-11-28 12:50:46.529275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.265 [2024-11-28 12:50:46.529290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.265 qpair failed and we were unable to recover it. 
00:27:04.265 [2024-11-28 12:50:46.529527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.265 [2024-11-28 12:50:46.529543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.265 qpair failed and we were unable to recover it. 00:27:04.265 [2024-11-28 12:50:46.529776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.265 [2024-11-28 12:50:46.529793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.265 qpair failed and we were unable to recover it. 00:27:04.265 [2024-11-28 12:50:46.530031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.265 [2024-11-28 12:50:46.530047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.265 qpair failed and we were unable to recover it. 00:27:04.265 [2024-11-28 12:50:46.530198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.265 [2024-11-28 12:50:46.530214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.265 qpair failed and we were unable to recover it. 00:27:04.265 [2024-11-28 12:50:46.530436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.265 [2024-11-28 12:50:46.530469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.265 qpair failed and we were unable to recover it. 
00:27:04.265 [2024-11-28 12:50:46.530660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.265 [2024-11-28 12:50:46.530693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.530874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.530907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.531118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.531152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.531345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.531376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.531527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.531560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 
00:27:04.266 [2024-11-28 12:50:46.531742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.531779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.531963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.531980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.532166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.532197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.532380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.532412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.532683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.532715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 
00:27:04.266 [2024-11-28 12:50:46.532891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.532922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.533195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.533227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.533500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.533533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.533722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.533755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.534026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.534059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 
00:27:04.266 [2024-11-28 12:50:46.534354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.534388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.534658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.534691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.534935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.534978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.535227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.535259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.535441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.535475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 
00:27:04.266 [2024-11-28 12:50:46.535717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.535750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.536021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.536055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.536330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.536363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.536557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.536588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.536777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.536810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 
00:27:04.266 [2024-11-28 12:50:46.537000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.537017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.537187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.537219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.537414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.537447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.537714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.537746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.538034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.538068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 
00:27:04.266 [2024-11-28 12:50:46.538278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.538311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.538498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.538530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.538798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.538831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.539020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.539052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.539250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.539282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 
00:27:04.266 [2024-11-28 12:50:46.539460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.539476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.539714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.539730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.539961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.539978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.540162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.266 [2024-11-28 12:50:46.540178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.266 qpair failed and we were unable to recover it. 00:27:04.266 [2024-11-28 12:50:46.540418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.267 [2024-11-28 12:50:46.540450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.267 qpair failed and we were unable to recover it. 
00:27:04.267 [2024-11-28 12:50:46.540580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.267 [2024-11-28 12:50:46.540612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.267 qpair failed and we were unable to recover it. 00:27:04.267 [2024-11-28 12:50:46.540827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.267 [2024-11-28 12:50:46.540860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.267 qpair failed and we were unable to recover it. 00:27:04.267 [2024-11-28 12:50:46.541107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.267 [2024-11-28 12:50:46.541140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.267 qpair failed and we were unable to recover it. 00:27:04.267 [2024-11-28 12:50:46.541254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.267 [2024-11-28 12:50:46.541296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.267 qpair failed and we were unable to recover it. 00:27:04.267 [2024-11-28 12:50:46.541438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.267 [2024-11-28 12:50:46.541454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.267 qpair failed and we were unable to recover it. 
00:27:04.267 [2024-11-28 12:50:46.541689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.267 [2024-11-28 12:50:46.541722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.267 qpair failed and we were unable to recover it. 00:27:04.267 [2024-11-28 12:50:46.541901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.267 [2024-11-28 12:50:46.541939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.267 qpair failed and we were unable to recover it. 00:27:04.267 [2024-11-28 12:50:46.542259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.267 [2024-11-28 12:50:46.542292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.267 qpair failed and we were unable to recover it. 00:27:04.267 [2024-11-28 12:50:46.542562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.267 [2024-11-28 12:50:46.542596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.267 qpair failed and we were unable to recover it. 00:27:04.267 [2024-11-28 12:50:46.542885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.267 [2024-11-28 12:50:46.542917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.267 qpair failed and we were unable to recover it. 
00:27:04.267 [2024-11-28 12:50:46.543159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.267 [2024-11-28 12:50:46.543174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.267 qpair failed and we were unable to recover it. 00:27:04.267 [2024-11-28 12:50:46.543383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.267 [2024-11-28 12:50:46.543399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.267 qpair failed and we were unable to recover it. 00:27:04.267 [2024-11-28 12:50:46.543559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.267 [2024-11-28 12:50:46.543575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.267 qpair failed and we were unable to recover it. 00:27:04.267 [2024-11-28 12:50:46.543739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.267 [2024-11-28 12:50:46.543772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.267 qpair failed and we were unable to recover it. 00:27:04.267 [2024-11-28 12:50:46.544037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.267 [2024-11-28 12:50:46.544070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.267 qpair failed and we were unable to recover it. 
00:27:04.267 [2024-11-28 12:50:46.544184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.267 [2024-11-28 12:50:46.544217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.267 qpair failed and we were unable to recover it. 00:27:04.267 [2024-11-28 12:50:46.544463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.267 [2024-11-28 12:50:46.544495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.267 qpair failed and we were unable to recover it. 00:27:04.267 [2024-11-28 12:50:46.544677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.267 [2024-11-28 12:50:46.544709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.267 qpair failed and we were unable to recover it. 00:27:04.267 [2024-11-28 12:50:46.544835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.267 [2024-11-28 12:50:46.544867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.267 qpair failed and we were unable to recover it. 00:27:04.267 [2024-11-28 12:50:46.545111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.267 [2024-11-28 12:50:46.545127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.267 qpair failed and we were unable to recover it. 
00:27:04.269 [2024-11-28 12:50:46.564058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.269 [2024-11-28 12:50:46.564092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.269 qpair failed and we were unable to recover it.
00:27:04.269 [2024-11-28 12:50:46.564353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.269 [2024-11-28 12:50:46.564384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.269 qpair failed and we were unable to recover it.
00:27:04.269 [2024-11-28 12:50:46.564578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.269 [2024-11-28 12:50:46.564610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.269 qpair failed and we were unable to recover it.
00:27:04.269 [2024-11-28 12:50:46.564887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.269 [2024-11-28 12:50:46.564920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.269 qpair failed and we were unable to recover it.
00:27:04.269 [2024-11-28 12:50:46.565250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.269 [2024-11-28 12:50:46.565314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.269 qpair failed and we were unable to recover it.
00:27:04.269 [2024-11-28 12:50:46.565516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.269 [2024-11-28 12:50:46.565547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.269 qpair failed and we were unable to recover it.
00:27:04.269 [2024-11-28 12:50:46.565764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.269 [2024-11-28 12:50:46.565783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.269 qpair failed and we were unable to recover it.
00:27:04.269 [2024-11-28 12:50:46.565959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.269 [2024-11-28 12:50:46.565973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.269 qpair failed and we were unable to recover it.
00:27:04.269 [2024-11-28 12:50:46.566198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.269 [2024-11-28 12:50:46.566221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.269 qpair failed and we were unable to recover it.
00:27:04.269 [2024-11-28 12:50:46.566506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.269 [2024-11-28 12:50:46.566538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.269 qpair failed and we were unable to recover it.
00:27:04.269 [2024-11-28 12:50:46.566716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.269 [2024-11-28 12:50:46.566749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.269 qpair failed and we were unable to recover it. 00:27:04.269 [2024-11-28 12:50:46.566959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.269 [2024-11-28 12:50:46.566995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.269 qpair failed and we were unable to recover it. 00:27:04.269 [2024-11-28 12:50:46.567172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.269 [2024-11-28 12:50:46.567185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.269 qpair failed and we were unable to recover it. 00:27:04.269 [2024-11-28 12:50:46.567346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.269 [2024-11-28 12:50:46.567378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.269 qpair failed and we were unable to recover it. 00:27:04.269 [2024-11-28 12:50:46.567582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.269 [2024-11-28 12:50:46.567616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.269 qpair failed and we were unable to recover it. 
00:27:04.270 [2024-11-28 12:50:46.567809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.270 [2024-11-28 12:50:46.567841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.270 qpair failed and we were unable to recover it. 00:27:04.270 [2024-11-28 12:50:46.568114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.270 [2024-11-28 12:50:46.568149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.270 qpair failed and we were unable to recover it. 00:27:04.270 [2024-11-28 12:50:46.568362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.270 [2024-11-28 12:50:46.568394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.270 qpair failed and we were unable to recover it. 00:27:04.270 [2024-11-28 12:50:46.568543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.270 [2024-11-28 12:50:46.568555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.270 qpair failed and we were unable to recover it. 00:27:04.270 [2024-11-28 12:50:46.568719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.270 [2024-11-28 12:50:46.568752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.270 qpair failed and we were unable to recover it. 
00:27:04.271 [2024-11-28 12:50:46.582199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.271 [2024-11-28 12:50:46.582239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.271 qpair failed and we were unable to recover it.
00:27:04.272 [2024-11-28 12:50:46.596369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.596383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.596574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.596607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.596810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.596844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.597093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.597126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.597405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.597421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 
00:27:04.273 [2024-11-28 12:50:46.597652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.597669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.597811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.597827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.598045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.598079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.598271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.598288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.598388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.598404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 
00:27:04.273 [2024-11-28 12:50:46.598495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.598509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.598752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.598790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.598933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.598974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.599162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.599195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.599543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.599577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 
00:27:04.273 [2024-11-28 12:50:46.599872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.599906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.600181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.600216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.600495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.600529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.600740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.600773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.600968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.601003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 
00:27:04.273 [2024-11-28 12:50:46.601248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.601264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.601428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.601445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.601701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.601733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.601957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.601992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.602242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.602274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 
00:27:04.273 [2024-11-28 12:50:46.602409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.602442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.602715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.602748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.602933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.602977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.603187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.603231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.603469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.603485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 
00:27:04.273 [2024-11-28 12:50:46.603698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.603714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.603816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.603833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.604091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.604108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.273 [2024-11-28 12:50:46.604331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.273 [2024-11-28 12:50:46.604347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.273 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.604558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.604574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 
00:27:04.274 [2024-11-28 12:50:46.604806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.604822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.605071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.605107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.605384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.605416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.605530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.605568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.605753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.605786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 
00:27:04.274 [2024-11-28 12:50:46.605935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.605978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.606202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.606235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.606365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.606381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.606633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.606666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.606954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.606988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 
00:27:04.274 [2024-11-28 12:50:46.607261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.607277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.607516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.607532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.607718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.607734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.607945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.607969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.608230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.608246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 
00:27:04.274 [2024-11-28 12:50:46.608458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.608475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.608708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.608723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.608891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.608908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.609076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.609093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.609177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.609192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 
00:27:04.274 [2024-11-28 12:50:46.609354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.609369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.609618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.609634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.609797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.609813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.609986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.610003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.610153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.610169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 
00:27:04.274 [2024-11-28 12:50:46.610325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.610340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.610602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.610619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.610844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.610876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.611040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.611058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.611234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.611250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 
00:27:04.274 [2024-11-28 12:50:46.611478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.611494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.611611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.611627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.611853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.611870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.612093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.612109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.612217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.612232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 
00:27:04.274 [2024-11-28 12:50:46.612386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.612402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.612549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.612566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.274 [2024-11-28 12:50:46.612777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.274 [2024-11-28 12:50:46.612793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.274 qpair failed and we were unable to recover it. 00:27:04.275 [2024-11-28 12:50:46.612882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.275 [2024-11-28 12:50:46.612897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.275 qpair failed and we were unable to recover it. 00:27:04.275 [2024-11-28 12:50:46.613062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.275 [2024-11-28 12:50:46.613079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.275 qpair failed and we were unable to recover it. 
00:27:04.275 [2024-11-28 12:50:46.613323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.275 [2024-11-28 12:50:46.613339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.275 qpair failed and we were unable to recover it.
00:27:04.278 [... the same three-line error record (connect() failed, errno = 111 -> sock connection error -> qpair failed and we were unable to recover it) repeats continuously from 12:50:46.613 to 12:50:46.635, always against addr=10.0.0.2, port=4420, for tqpair=0xd02be0, tqpair=0x7f8c5c000b90 and tqpair=0x7f8c58000b90 ...]
00:27:04.278 [2024-11-28 12:50:46.635990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.636003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.636157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.636169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.636323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.636336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.636562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.636576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.636786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.636799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 
00:27:04.278 [2024-11-28 12:50:46.636961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.636977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.637147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.637160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.637320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.637333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.637560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.637573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.637718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.637729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 
00:27:04.278 [2024-11-28 12:50:46.637958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.637996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.638324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.638360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.638451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.638468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.638653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.638668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.638901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.638917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 
00:27:04.278 [2024-11-28 12:50:46.639134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.639155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.639413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.639429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.639640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.639656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.639893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.639909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.640090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.640106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 
00:27:04.278 [2024-11-28 12:50:46.640272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.640288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.640524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.640540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.640748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.640764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.640930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.640946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.641200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.641216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 
00:27:04.278 [2024-11-28 12:50:46.641453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.641470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.641705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.641722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.641961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.641978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.642133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.642149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.642339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.642355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 
00:27:04.278 [2024-11-28 12:50:46.642510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.278 [2024-11-28 12:50:46.642525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.278 qpair failed and we were unable to recover it. 00:27:04.278 [2024-11-28 12:50:46.642715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.642732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.642971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.642987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.643094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.643109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.643346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.643361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 
00:27:04.279 [2024-11-28 12:50:46.643546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.643561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.643771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.643787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.644000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.644019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.644230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.644246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.644410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.644426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 
00:27:04.279 [2024-11-28 12:50:46.644637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.644653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.644748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.644762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.644904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.644919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.645155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.645172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.645385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.645401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 
00:27:04.279 [2024-11-28 12:50:46.645607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.645624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.645835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.645851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.646085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.646102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.646321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.646337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.646523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.646539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 
00:27:04.279 [2024-11-28 12:50:46.646749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.646765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.646976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.646993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.647151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.647167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.647328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.647345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.647497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.647513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 
00:27:04.279 [2024-11-28 12:50:46.647665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.647681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.647914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.647930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.648030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.648045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.648214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.648230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.648464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.648480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 
00:27:04.279 [2024-11-28 12:50:46.648728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.648744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.648887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.648903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.649120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.649136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.649211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.649226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.649510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.649526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 
00:27:04.279 [2024-11-28 12:50:46.649697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.649714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.649812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.649827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.650060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.650077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.650187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.650202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 00:27:04.279 [2024-11-28 12:50:46.650413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.279 [2024-11-28 12:50:46.650429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.279 qpair failed and we were unable to recover it. 
00:27:04.279 [2024-11-28 12:50:46.650657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.280 [2024-11-28 12:50:46.650674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.280 qpair failed and we were unable to recover it. 00:27:04.280 [2024-11-28 12:50:46.650854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.280 [2024-11-28 12:50:46.650871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.280 qpair failed and we were unable to recover it. 00:27:04.280 [2024-11-28 12:50:46.651107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.280 [2024-11-28 12:50:46.651123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.280 qpair failed and we were unable to recover it. 00:27:04.280 [2024-11-28 12:50:46.651354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.280 [2024-11-28 12:50:46.651370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.280 qpair failed and we were unable to recover it. 00:27:04.280 [2024-11-28 12:50:46.651527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.280 [2024-11-28 12:50:46.651543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.280 qpair failed and we were unable to recover it. 
00:27:04.280 [2024-11-28 12:50:46.651737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.280 [2024-11-28 12:50:46.651753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.280 qpair failed and we were unable to recover it. 00:27:04.280 [2024-11-28 12:50:46.651983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.280 [2024-11-28 12:50:46.651999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.280 qpair failed and we were unable to recover it. 00:27:04.280 [2024-11-28 12:50:46.652160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.280 [2024-11-28 12:50:46.652176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.280 qpair failed and we were unable to recover it. 00:27:04.280 [2024-11-28 12:50:46.652328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.280 [2024-11-28 12:50:46.652344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.280 qpair failed and we were unable to recover it. 00:27:04.280 [2024-11-28 12:50:46.652435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.280 [2024-11-28 12:50:46.652449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.280 qpair failed and we were unable to recover it. 
00:27:04.280 [2024-11-28 12:50:46.652545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.280 [2024-11-28 12:50:46.652560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.280 qpair failed and we were unable to recover it. 00:27:04.280 [2024-11-28 12:50:46.652815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.280 [2024-11-28 12:50:46.652830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.280 qpair failed and we were unable to recover it. 00:27:04.280 [2024-11-28 12:50:46.653083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.280 [2024-11-28 12:50:46.653099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.280 qpair failed and we were unable to recover it. 00:27:04.280 [2024-11-28 12:50:46.653262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.280 [2024-11-28 12:50:46.653277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.280 qpair failed and we were unable to recover it. 00:27:04.280 [2024-11-28 12:50:46.653519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.280 [2024-11-28 12:50:46.653534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.280 qpair failed and we were unable to recover it. 
00:27:04.280 [2024-11-28 12:50:46.653734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.653750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.280 qpair failed and we were unable to recover it.
00:27:04.280 [2024-11-28 12:50:46.653974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.653991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.280 qpair failed and we were unable to recover it.
00:27:04.280 [2024-11-28 12:50:46.654153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.654169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.280 qpair failed and we were unable to recover it.
00:27:04.280 [2024-11-28 12:50:46.654377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.654392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.280 qpair failed and we were unable to recover it.
00:27:04.280 [2024-11-28 12:50:46.654603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.654619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.280 qpair failed and we were unable to recover it.
00:27:04.280 [2024-11-28 12:50:46.654730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.654746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.280 qpair failed and we were unable to recover it.
00:27:04.280 [2024-11-28 12:50:46.654967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.654984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.280 qpair failed and we were unable to recover it.
00:27:04.280 [2024-11-28 12:50:46.655164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.655180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.280 qpair failed and we were unable to recover it.
00:27:04.280 [2024-11-28 12:50:46.655382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.655397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.280 qpair failed and we were unable to recover it.
00:27:04.280 [2024-11-28 12:50:46.655554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.655570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.280 qpair failed and we were unable to recover it.
00:27:04.280 [2024-11-28 12:50:46.655797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.655813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.280 qpair failed and we were unable to recover it.
00:27:04.280 [2024-11-28 12:50:46.656045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.656061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.280 qpair failed and we were unable to recover it.
00:27:04.280 [2024-11-28 12:50:46.656316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.656332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.280 qpair failed and we were unable to recover it.
00:27:04.280 [2024-11-28 12:50:46.656563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.656580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.280 qpair failed and we were unable to recover it.
00:27:04.280 [2024-11-28 12:50:46.656720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.656736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.280 qpair failed and we were unable to recover it.
00:27:04.280 [2024-11-28 12:50:46.656883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.656900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.280 qpair failed and we were unable to recover it.
00:27:04.280 [2024-11-28 12:50:46.657054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.657071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.280 qpair failed and we were unable to recover it.
00:27:04.280 [2024-11-28 12:50:46.657244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.657260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.280 qpair failed and we were unable to recover it.
00:27:04.280 [2024-11-28 12:50:46.657406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.657422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.280 qpair failed and we were unable to recover it.
00:27:04.280 [2024-11-28 12:50:46.657665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.657681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.280 qpair failed and we were unable to recover it.
00:27:04.280 [2024-11-28 12:50:46.657765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.657782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.280 qpair failed and we were unable to recover it.
00:27:04.280 [2024-11-28 12:50:46.658008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.658025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.280 qpair failed and we were unable to recover it.
00:27:04.280 [2024-11-28 12:50:46.658240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.658255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.280 qpair failed and we were unable to recover it.
00:27:04.280 [2024-11-28 12:50:46.658399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.280 [2024-11-28 12:50:46.658415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.658557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.658574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.658801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.658817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.658920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.658934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.659099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.659121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.659358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.659372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.659623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.659637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.659797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.659810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.660014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.660029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.660270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.660282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.660363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.660373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.660575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.660589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.660790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.660806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.661058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.661071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.661328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.661341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.661497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.661509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.661684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.661697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.661908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.661920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.662123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.662136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.662285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.662298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.662499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.662512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.662741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.662754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.663019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.663033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.663254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.663266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.663441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.663470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.663655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.663672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.663893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.663917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.664101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.664116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.664277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.664290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.664529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.664546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.664699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.664711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.664803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.664814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.664963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.664976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.665126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.665140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.665343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.665356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.665582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.665595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.665836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.665848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.666080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.666097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.281 [2024-11-28 12:50:46.666323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.281 [2024-11-28 12:50:46.666337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.281 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.666550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.666563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.666712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.666725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.666858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.666870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.666960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.666972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.667108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.667119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.667357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.667370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.667518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.667532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.667718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.667731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.667982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.667997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.668198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.668212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.668437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.668451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.668678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.668690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.668913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.668926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.669176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.669191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.669337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.669350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.669575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.669589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.669745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.669757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.669825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.669840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.669937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.669953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.670176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.670190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.670391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.670404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.670631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.670645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.670794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.670807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.670961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.670975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.671109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.671123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.671335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.671380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.671597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.671631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.671896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.671934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.672247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.672283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.672550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.672584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.672869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.672900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.673107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.673142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.673414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.282 [2024-11-28 12:50:46.673448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.282 qpair failed and we were unable to recover it.
00:27:04.282 [2024-11-28 12:50:46.673717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.673729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.673873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.673885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.673995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.674007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.674244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.674277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.674472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.674504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 
00:27:04.283 [2024-11-28 12:50:46.674762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.674795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.675096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.675132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.675393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.675426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.675637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.675670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.675912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.675945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 
00:27:04.283 [2024-11-28 12:50:46.676254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.676287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.676569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.676602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.676902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.676934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.677152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.677186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.677450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.677483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 
00:27:04.283 [2024-11-28 12:50:46.677755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.677789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.678041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.678075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.678333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.678368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.678581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.678613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.678797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.678831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 
00:27:04.283 [2024-11-28 12:50:46.679010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.679045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.679245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.679277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.679519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.679532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.679689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.679722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.679919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.679962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 
00:27:04.283 [2024-11-28 12:50:46.680212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.680245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.680512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.680546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.680791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.680824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.681084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.681120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.681331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.681365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 
00:27:04.283 [2024-11-28 12:50:46.681565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.681597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.681812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.681824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.681997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.682013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.682163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.682195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.682308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.682341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 
00:27:04.283 [2024-11-28 12:50:46.682551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.682584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.682862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.682874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.683110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.283 [2024-11-28 12:50:46.683145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.283 qpair failed and we were unable to recover it. 00:27:04.283 [2024-11-28 12:50:46.683290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.683324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.683594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.683627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 
00:27:04.284 [2024-11-28 12:50:46.683905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.683931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.684226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.684259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.684532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.684566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.684699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.684731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.684849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.684880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 
00:27:04.284 [2024-11-28 12:50:46.685032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.685068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.685260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.685292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.685555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.685589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.685694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.685706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.685937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.685982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 
00:27:04.284 [2024-11-28 12:50:46.686174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.686207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.686453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.686486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.686673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.686685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.686851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.686884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.687152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.687187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 
00:27:04.284 [2024-11-28 12:50:46.687369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.687403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.687600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.687612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.687792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.687824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.688018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.688051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.688347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.688381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 
00:27:04.284 [2024-11-28 12:50:46.688565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.688598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.688807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.688841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.689115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.689150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.689435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.689468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.689648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.689682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 
00:27:04.284 [2024-11-28 12:50:46.689982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.689994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.690242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.690285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.690576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.690609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.690829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.690862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.691134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.691170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 
00:27:04.284 [2024-11-28 12:50:46.691394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.691427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.691638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.691650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.691900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.691915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.692142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.692155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.692356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.692389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 
00:27:04.284 [2024-11-28 12:50:46.692635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.692668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.284 qpair failed and we were unable to recover it. 00:27:04.284 [2024-11-28 12:50:46.692848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.284 [2024-11-28 12:50:46.692882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.285 qpair failed and we were unable to recover it. 00:27:04.285 [2024-11-28 12:50:46.693143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.285 [2024-11-28 12:50:46.693178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.285 qpair failed and we were unable to recover it. 00:27:04.285 [2024-11-28 12:50:46.693475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.285 [2024-11-28 12:50:46.693498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.285 qpair failed and we were unable to recover it. 00:27:04.285 [2024-11-28 12:50:46.693748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.285 [2024-11-28 12:50:46.693761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.285 qpair failed and we were unable to recover it. 
00:27:04.285 [2024-11-28 12:50:46.693961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.285 [2024-11-28 12:50:46.693974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.285 qpair failed and we were unable to recover it. 00:27:04.285 [2024-11-28 12:50:46.694119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.285 [2024-11-28 12:50:46.694131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.285 qpair failed and we were unable to recover it. 00:27:04.285 [2024-11-28 12:50:46.694341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.285 [2024-11-28 12:50:46.694374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.285 qpair failed and we were unable to recover it. 00:27:04.285 [2024-11-28 12:50:46.694568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.285 [2024-11-28 12:50:46.694601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.285 qpair failed and we were unable to recover it. 00:27:04.285 [2024-11-28 12:50:46.694848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.285 [2024-11-28 12:50:46.694880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.285 qpair failed and we were unable to recover it. 
00:27:04.285 [2024-11-28 12:50:46.695072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.695107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.695359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.695393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.695637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.695669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.695906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.695919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.696167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.696180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.696431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.696464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.696663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.696697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.696943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.696990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.697207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.697240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.697507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.697541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.697836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.697848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.698016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.698029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.698257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.698269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.698475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.698508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.698789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.698821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.699015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.699051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.699277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.699311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.699504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.699536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.699809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.699842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.700118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.700153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.700285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.700317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.700511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.700523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.700677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.700690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.700923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.700978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.701220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.701269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.701549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.701566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.701779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.701791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.701994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.702011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.702236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.702248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.285 [2024-11-28 12:50:46.702426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.285 [2024-11-28 12:50:46.702439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.285 qpair failed and we were unable to recover it.
00:27:04.286 [2024-11-28 12:50:46.702666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.286 [2024-11-28 12:50:46.702678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.286 qpair failed and we were unable to recover it.
00:27:04.286 [2024-11-28 12:50:46.702776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.286 [2024-11-28 12:50:46.702787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.286 qpair failed and we were unable to recover it.
00:27:04.286 [2024-11-28 12:50:46.703004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.286 [2024-11-28 12:50:46.703033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.286 qpair failed and we were unable to recover it.
00:27:04.286 [2024-11-28 12:50:46.703239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.286 [2024-11-28 12:50:46.703252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.286 qpair failed and we were unable to recover it.
00:27:04.286 [2024-11-28 12:50:46.703404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.286 [2024-11-28 12:50:46.703417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.286 qpair failed and we were unable to recover it.
00:27:04.286 [2024-11-28 12:50:46.703572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.286 [2024-11-28 12:50:46.703592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.286 qpair failed and we were unable to recover it.
00:27:04.286 [2024-11-28 12:50:46.703766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.286 [2024-11-28 12:50:46.703780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.286 qpair failed and we were unable to recover it.
00:27:04.286 [2024-11-28 12:50:46.703920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.286 [2024-11-28 12:50:46.703933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.286 qpair failed and we were unable to recover it.
00:27:04.286 [2024-11-28 12:50:46.704139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.286 [2024-11-28 12:50:46.704153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.286 qpair failed and we were unable to recover it.
00:27:04.286 [2024-11-28 12:50:46.704315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.286 [2024-11-28 12:50:46.704349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.286 qpair failed and we were unable to recover it.
00:27:04.286 [2024-11-28 12:50:46.704553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.286 [2024-11-28 12:50:46.704586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.286 qpair failed and we were unable to recover it.
00:27:04.286 [2024-11-28 12:50:46.704722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.286 [2024-11-28 12:50:46.704756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.286 qpair failed and we were unable to recover it.
00:27:04.286 [2024-11-28 12:50:46.705003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.286 [2024-11-28 12:50:46.705016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.286 qpair failed and we were unable to recover it.
00:27:04.286 [2024-11-28 12:50:46.705241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.286 [2024-11-28 12:50:46.705254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.286 qpair failed and we were unable to recover it.
00:27:04.286 [2024-11-28 12:50:46.705522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.286 [2024-11-28 12:50:46.705541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.286 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.705747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.705760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.705964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.705978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.706084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.706097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.706395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.706407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.706641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.706664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.706867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.706879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.707094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.707107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.707309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.707321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.707489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.707501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.707726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.707738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.707922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.707935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.708097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.708110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.708337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.708349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.708497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.708509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.708668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.708681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.708882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.708894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.709129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.709142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.709231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.709242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.709388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.709400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.709555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.709568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.709639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.709650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.709794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.709805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.710030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.710047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.710153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.710164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.710367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.710380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.710539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.710551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.710822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.710834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.710998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.711011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.711142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.711154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.711312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.711325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.711405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.711416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.711481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.711492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.711709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.711721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.566 [2024-11-28 12:50:46.711950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.566 [2024-11-28 12:50:46.711963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.566 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.712143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.712154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.712304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.712317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.712464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.712476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.712703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.712715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.712959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.712972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.713123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.713136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.713360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.713372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.713588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.713601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.713693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.713705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.713798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.713837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.714109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.714143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.714346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.714379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.714671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.714699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.714916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.714963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.715223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.715256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.715447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.715480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.715754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.715786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.716053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.716087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.716309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.716341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.716519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.716551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.716818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.716851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.717042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.717075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.717346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.717379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.717634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.717647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.717861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.717873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.718074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.718088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.718312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.718324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.718557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.718592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.718840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.718879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.719129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.719162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.719410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.719443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.719718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.719751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.719998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.720031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.720295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.720327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.720623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.720657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.720936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.567 [2024-11-28 12:50:46.720952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.567 qpair failed and we were unable to recover it.
00:27:04.567 [2024-11-28 12:50:46.721048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.567 [2024-11-28 12:50:46.721059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.567 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.721201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.721214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.721333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.721365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.721611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.721643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.721861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.721894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 
00:27:04.568 [2024-11-28 12:50:46.722224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.722258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.722541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.722574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.722849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.722883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.723167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.723200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.723476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.723509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 
00:27:04.568 [2024-11-28 12:50:46.723764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.723796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.723978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.723991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.724198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.724230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.724503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.724537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.724789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.724821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 
00:27:04.568 [2024-11-28 12:50:46.725134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.725168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.725438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.725480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.725699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.725711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.725863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.725875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.726104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.726117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 
00:27:04.568 [2024-11-28 12:50:46.726288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.726323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.726617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.726650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.726856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.726890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.727122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.727135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.727279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.727313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 
00:27:04.568 [2024-11-28 12:50:46.727571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.727605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.727857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.727891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.728097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.728131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.728396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.728429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.728613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.728625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 
00:27:04.568 [2024-11-28 12:50:46.728711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.728740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.728908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.728942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.729175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.729214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.729349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.729383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.729562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.729593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 
00:27:04.568 [2024-11-28 12:50:46.729866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.729899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.730172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.730207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.730489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.730521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.568 qpair failed and we were unable to recover it. 00:27:04.568 [2024-11-28 12:50:46.730733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.568 [2024-11-28 12:50:46.730766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.731015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.731050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 
00:27:04.569 [2024-11-28 12:50:46.731307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.731341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.731485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.731518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.731789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.731822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.732087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.732100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.732375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.732408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 
00:27:04.569 [2024-11-28 12:50:46.732708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.732741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.732933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.732986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.733195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.733228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.733523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.733556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.733826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.733860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 
00:27:04.569 [2024-11-28 12:50:46.734093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.734106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.734261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.734274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.734408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.734451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.734698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.734731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.735026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.735061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 
00:27:04.569 [2024-11-28 12:50:46.735263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.735297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.735536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.735549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.735725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.735737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.735831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.735842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.735997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.736009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 
00:27:04.569 [2024-11-28 12:50:46.736084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.736112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.736305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.736338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.736457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.736469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.736604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.736616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.736831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.736843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 
00:27:04.569 [2024-11-28 12:50:46.737064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.737076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.737171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.737182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.737370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.737403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.737699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.737733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.737929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.737940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 
00:27:04.569 [2024-11-28 12:50:46.738187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.738220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.738412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.738446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.738739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.738777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.738975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.739009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.739190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.739223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 
00:27:04.569 [2024-11-28 12:50:46.739440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.569 [2024-11-28 12:50:46.739472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.569 qpair failed and we were unable to recover it. 00:27:04.569 [2024-11-28 12:50:46.739673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.739705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.739956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.739970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.740117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.740128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.740280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.740312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 
00:27:04.570 [2024-11-28 12:50:46.740589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.740622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.740897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.740930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.741212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.741245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.741406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.741438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.741713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.741745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 
00:27:04.570 [2024-11-28 12:50:46.741929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.741990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.742322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.742354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.742640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.742673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.742862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.742896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.743097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.743130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 
00:27:04.570 [2024-11-28 12:50:46.743379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.743412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.743712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.743745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.743919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.743931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.744091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.744104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.744271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.744285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 
00:27:04.570 [2024-11-28 12:50:46.744390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.744402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.744629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.744664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.744871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.744904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.745164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.745198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.745388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.745421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 
00:27:04.570 [2024-11-28 12:50:46.745696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.745709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.745935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.745977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.746176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.746210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.746533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.746569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.746792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.746825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 
00:27:04.570 [2024-11-28 12:50:46.747073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.747085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.747254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.747266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.747430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.570 [2024-11-28 12:50:46.747463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.570 qpair failed and we were unable to recover it. 00:27:04.570 [2024-11-28 12:50:46.747701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.747734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.748030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.748064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 
00:27:04.571 [2024-11-28 12:50:46.748213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.748247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.748458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.748492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.748766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.748805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.749028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.749061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.749284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.749316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 
00:27:04.571 [2024-11-28 12:50:46.749430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.749461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.749753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.749788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.749935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.749957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.750189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.750201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.750388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.750422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 
00:27:04.571 [2024-11-28 12:50:46.750620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.750652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.750935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.750982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.751251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.751284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.751583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.751617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.751887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.751919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 
00:27:04.571 [2024-11-28 12:50:46.752151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.752184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.752394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.752445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.752644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.752677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.752964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.753010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.753213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.753247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 
00:27:04.571 [2024-11-28 12:50:46.753521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.753554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.753832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.753867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.754119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.754152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.754347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.754380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.754649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.754662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 
00:27:04.571 [2024-11-28 12:50:46.754810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.754822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.755072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.755107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.755308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.755341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.755614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.755647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.755863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.755877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 
00:27:04.571 [2024-11-28 12:50:46.756088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.756121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.756313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.756345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.756595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.756629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.756923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.756975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.757119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.757152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 
00:27:04.571 [2024-11-28 12:50:46.757355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.571 [2024-11-28 12:50:46.757389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.571 qpair failed and we were unable to recover it. 00:27:04.571 [2024-11-28 12:50:46.757608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.757620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 00:27:04.572 [2024-11-28 12:50:46.757720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.757751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 00:27:04.572 [2024-11-28 12:50:46.758024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.758058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 00:27:04.572 [2024-11-28 12:50:46.758360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.758392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 
00:27:04.572 [2024-11-28 12:50:46.758523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.758554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 00:27:04.572 [2024-11-28 12:50:46.758853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.758888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 00:27:04.572 [2024-11-28 12:50:46.759105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.759150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 00:27:04.572 [2024-11-28 12:50:46.759413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.759447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 00:27:04.572 [2024-11-28 12:50:46.759728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.759761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 
00:27:04.572 [2024-11-28 12:50:46.760038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.760052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 00:27:04.572 [2024-11-28 12:50:46.760253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.760265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 00:27:04.572 [2024-11-28 12:50:46.760434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.760449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 00:27:04.572 [2024-11-28 12:50:46.760671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.760684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 00:27:04.572 [2024-11-28 12:50:46.760917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.760960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 
00:27:04.572 [2024-11-28 12:50:46.761152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.761186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 00:27:04.572 [2024-11-28 12:50:46.761456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.761489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 00:27:04.572 [2024-11-28 12:50:46.761679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.761712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 00:27:04.572 [2024-11-28 12:50:46.761919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.761931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 00:27:04.572 [2024-11-28 12:50:46.762154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.762167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 
00:27:04.572 [2024-11-28 12:50:46.762264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.762275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 00:27:04.572 [2024-11-28 12:50:46.762444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.762457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 00:27:04.572 [2024-11-28 12:50:46.762543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.762555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 00:27:04.572 [2024-11-28 12:50:46.762778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.762791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 00:27:04.572 [2024-11-28 12:50:46.762925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.572 [2024-11-28 12:50:46.762938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.572 qpair failed and we were unable to recover it. 
00:27:04.572 [2024-11-28 12:50:46.763161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.572 [2024-11-28 12:50:46.763195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.572 qpair failed and we were unable to recover it.
[last message sequence repeated ~114 more times between 12:50:46.763 and 12:50:46.792 — connect() failed, errno = 111, first for tqpair=0x7f8c5c000b90, then for tqpair=0xd02be0, all with addr=10.0.0.2, port=4420]
00:27:04.575 [2024-11-28 12:50:46.792350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.575 [2024-11-28 12:50:46.792383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.575 qpair failed and we were unable to recover it. 00:27:04.575 [2024-11-28 12:50:46.792631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.575 [2024-11-28 12:50:46.792664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.575 qpair failed and we were unable to recover it. 00:27:04.575 [2024-11-28 12:50:46.792848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.575 [2024-11-28 12:50:46.792880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.575 qpair failed and we were unable to recover it. 00:27:04.575 [2024-11-28 12:50:46.793090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.575 [2024-11-28 12:50:46.793108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.575 qpair failed and we were unable to recover it. 00:27:04.575 [2024-11-28 12:50:46.793303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.575 [2024-11-28 12:50:46.793335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.575 qpair failed and we were unable to recover it. 
00:27:04.575 [2024-11-28 12:50:46.793529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.575 [2024-11-28 12:50:46.793562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.575 qpair failed and we were unable to recover it. 00:27:04.575 [2024-11-28 12:50:46.793811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.575 [2024-11-28 12:50:46.793844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.794025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.794042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.794276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.794292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.794465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.794499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 
00:27:04.576 [2024-11-28 12:50:46.794803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.794835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.795109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.795126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.795308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.795325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.795567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.795599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.795898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.795931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 
00:27:04.576 [2024-11-28 12:50:46.796195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.796212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.796442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.796458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.796675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.796691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.796926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.796942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.797217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.797234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 
00:27:04.576 [2024-11-28 12:50:46.797349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.797381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.797568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.797601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.797799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.797832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.798079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.798096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.798259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.798291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 
00:27:04.576 [2024-11-28 12:50:46.798568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.798601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.798807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.798839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.799094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.799128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.799404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.799438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.799720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.799753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 
00:27:04.576 [2024-11-28 12:50:46.799944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.799989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.800223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.800256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.800466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.800498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.800690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.800706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.800968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.801003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 
00:27:04.576 [2024-11-28 12:50:46.801251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.801283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.801484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.801517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.801792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.801826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.802014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.802032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.802298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.802330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 
00:27:04.576 [2024-11-28 12:50:46.802579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.802611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.802873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.802890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.803123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.803140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.803360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.803393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 00:27:04.576 [2024-11-28 12:50:46.803680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.576 [2024-11-28 12:50:46.803720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.576 qpair failed and we were unable to recover it. 
00:27:04.576 [2024-11-28 12:50:46.803938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.803960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.804145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.804163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.804397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.804430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.804572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.804605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.804886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.804903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 
00:27:04.577 [2024-11-28 12:50:46.805072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.805089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.805311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.805328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.805418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.805433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.805526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.805541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.805756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.805773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 
00:27:04.577 [2024-11-28 12:50:46.805937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.805960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.806195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.806211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.806453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.806492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.806646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.806678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.806814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.806856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 
00:27:04.577 [2024-11-28 12:50:46.807101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.807131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.807316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.807348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.807625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.807665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.807939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.807983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.808133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.808166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 
00:27:04.577 [2024-11-28 12:50:46.808429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.808461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.808714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.808747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.808892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.808924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.809117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.809134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.809354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.809386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 
00:27:04.577 [2024-11-28 12:50:46.809662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.809693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.809972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.809990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.810230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.810247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.810416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.810432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 00:27:04.577 [2024-11-28 12:50:46.810682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.577 [2024-11-28 12:50:46.810715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.577 qpair failed and we were unable to recover it. 
00:27:04.577 [2024-11-28 12:50:46.810918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.577 [2024-11-28 12:50:46.810935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.577 qpair failed and we were unable to recover it.
00:27:04.577-00:27:04.579 [the three messages above repeat for tqpair=0xd02be0 through 12:50:46.827329]
00:27:04.579 [2024-11-28 12:50:46.827689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.579 [2024-11-28 12:50:46.827764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.579 qpair failed and we were unable to recover it.
00:27:04.579-00:27:04.580 [the three messages above repeat for tqpair=0x7f8c58000b90 through 12:50:46.834924]
00:27:04.580 [2024-11-28 12:50:46.835182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.580 [2024-11-28 12:50:46.835217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.580 qpair failed and we were unable to recover it.
00:27:04.580-00:27:04.581 [the three messages above repeat for tqpair=0x7f8c5c000b90 through 12:50:46.838094, then again for tqpair=0x7f8c58000b90 through 12:50:46.840876]
00:27:04.581 [2024-11-28 12:50:46.841100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.841134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.841435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.841468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.841741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.841780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.842005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.842038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.842255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.842287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 
00:27:04.581 [2024-11-28 12:50:46.842416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.842448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.842750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.842782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.842977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.843022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.843249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.843282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.843563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.843597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 
00:27:04.581 [2024-11-28 12:50:46.843786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.843819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.844018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.844035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.844276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.844309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.844563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.844596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.844860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.844893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 
00:27:04.581 [2024-11-28 12:50:46.845185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.845219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.845432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.845465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.845743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.845776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.845966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.846000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.846188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.846205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 
00:27:04.581 [2024-11-28 12:50:46.846380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.846414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.846718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.846752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.847013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.847048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.847287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.847319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.847600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.847634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 
00:27:04.581 [2024-11-28 12:50:46.847821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.847838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.848102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.848119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.848383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.848415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.848646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.848680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.848888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.848905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 
00:27:04.581 [2024-11-28 12:50:46.849088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.849122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.849376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.849410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.849689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.849723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.850018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.850051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.850266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.850283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 
00:27:04.581 [2024-11-28 12:50:46.850441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.581 [2024-11-28 12:50:46.850457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.581 qpair failed and we were unable to recover it. 00:27:04.581 [2024-11-28 12:50:46.850701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.850734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.850994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.851030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.851325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.851343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.851537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.851571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 
00:27:04.582 [2024-11-28 12:50:46.851829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.851863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.852135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.852169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.852455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.852495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.852769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.852801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.853024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.853059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 
00:27:04.582 [2024-11-28 12:50:46.853243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.853277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.853429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.853461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.853669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.853702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.853995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.854030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.854171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.854205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 
00:27:04.582 [2024-11-28 12:50:46.854510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.854544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.854826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.854860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.855132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.855149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.855381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.855398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.855648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.855665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 
00:27:04.582 [2024-11-28 12:50:46.855832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.855850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.856030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.856065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.856329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.856363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.856564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.856599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.856879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.856914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 
00:27:04.582 [2024-11-28 12:50:46.857201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.857235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.857437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.857469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.857731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.857764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.857973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.857990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.858218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.858234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 
00:27:04.582 [2024-11-28 12:50:46.858382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.858400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.858611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.858627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.858853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.858871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.859057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.859083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.859303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.859351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 
00:27:04.582 [2024-11-28 12:50:46.859582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.859601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.859760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.859778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.859960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.859977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.860243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.582 [2024-11-28 12:50:46.860260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.582 qpair failed and we were unable to recover it. 00:27:04.582 [2024-11-28 12:50:46.860411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.583 [2024-11-28 12:50:46.860428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.583 qpair failed and we were unable to recover it. 
00:27:04.583 [2024-11-28 12:50:46.860600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.583 [2024-11-28 12:50:46.860617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.583 qpair failed and we were unable to recover it. 00:27:04.583 [2024-11-28 12:50:46.860864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.583 [2024-11-28 12:50:46.860881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.583 qpair failed and we were unable to recover it. 00:27:04.583 [2024-11-28 12:50:46.861113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.583 [2024-11-28 12:50:46.861130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.583 qpair failed and we were unable to recover it. 00:27:04.583 [2024-11-28 12:50:46.861371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.583 [2024-11-28 12:50:46.861388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.583 qpair failed and we were unable to recover it. 00:27:04.583 [2024-11-28 12:50:46.861635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.583 [2024-11-28 12:50:46.861652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.583 qpair failed and we were unable to recover it. 
00:27:04.584 [2024-11-28 12:50:46.874096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.584 [2024-11-28 12:50:46.874113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.584 qpair failed and we were unable to recover it. 00:27:04.584 [2024-11-28 12:50:46.874372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.584 [2024-11-28 12:50:46.874389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.584 qpair failed and we were unable to recover it. 00:27:04.584 [2024-11-28 12:50:46.874669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.584 [2024-11-28 12:50:46.874710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.584 qpair failed and we were unable to recover it. 00:27:04.584 [2024-11-28 12:50:46.875009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.584 [2024-11-28 12:50:46.875050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.584 qpair failed and we were unable to recover it. 00:27:04.584 [2024-11-28 12:50:46.875298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.584 [2024-11-28 12:50:46.875331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.584 qpair failed and we were unable to recover it. 
00:27:04.586 [2024-11-28 12:50:46.884886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.884899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 00:27:04.586 [2024-11-28 12:50:46.885114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.885128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 00:27:04.586 [2024-11-28 12:50:46.885272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.885286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 00:27:04.586 [2024-11-28 12:50:46.885511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.885524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 00:27:04.586 [2024-11-28 12:50:46.885614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.885628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 
00:27:04.586 [2024-11-28 12:50:46.885890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.885903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 00:27:04.586 [2024-11-28 12:50:46.886054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.886067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 00:27:04.586 [2024-11-28 12:50:46.886333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.886347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 00:27:04.586 [2024-11-28 12:50:46.886487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.886499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 00:27:04.586 [2024-11-28 12:50:46.886655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.886669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 
00:27:04.586 [2024-11-28 12:50:46.886829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.886843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 00:27:04.586 [2024-11-28 12:50:46.886962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.886977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 00:27:04.586 [2024-11-28 12:50:46.887242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.887255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 00:27:04.586 [2024-11-28 12:50:46.887485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.887499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 00:27:04.586 [2024-11-28 12:50:46.887731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.887745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 
00:27:04.586 [2024-11-28 12:50:46.887893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.887906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 00:27:04.586 [2024-11-28 12:50:46.888079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.888093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 00:27:04.586 [2024-11-28 12:50:46.888328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.888341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 00:27:04.586 [2024-11-28 12:50:46.888593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.888606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 00:27:04.586 [2024-11-28 12:50:46.888757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.888772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 
00:27:04.586 [2024-11-28 12:50:46.888930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.888944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 00:27:04.586 [2024-11-28 12:50:46.889118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.889131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 00:27:04.586 [2024-11-28 12:50:46.889271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.889285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 00:27:04.586 [2024-11-28 12:50:46.889500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.889512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 00:27:04.586 [2024-11-28 12:50:46.889655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.889668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 
00:27:04.586 [2024-11-28 12:50:46.889761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.889772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 00:27:04.586 [2024-11-28 12:50:46.890007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.890024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 00:27:04.586 [2024-11-28 12:50:46.890134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.586 [2024-11-28 12:50:46.890150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.586 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.890250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.890263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.890438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.890452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 
00:27:04.587 [2024-11-28 12:50:46.890725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.890741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.890888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.890902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.891133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.891148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.891380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.891395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.891606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.891620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 
00:27:04.587 [2024-11-28 12:50:46.891883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.891900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.892012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.892027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.892170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.892183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.892417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.892431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.892585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.892599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 
00:27:04.587 [2024-11-28 12:50:46.892690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.892702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.892937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.892958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.893186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.893199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.893348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.893364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.893526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.893539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 
00:27:04.587 [2024-11-28 12:50:46.893773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.893789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.894033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.894048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.894258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.894273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.894432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.894458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.894546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.894558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 
00:27:04.587 [2024-11-28 12:50:46.894716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.894729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.894877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.894890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.895046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.895060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.895234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.895248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.895422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.895436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 
00:27:04.587 [2024-11-28 12:50:46.895678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.895693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.895874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.895888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.896055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.896069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.896296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.896309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.896508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.896521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 
00:27:04.587 [2024-11-28 12:50:46.896753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.896769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.896985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.896999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.897179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.897193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.897425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.897439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 00:27:04.587 [2024-11-28 12:50:46.897668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.587 [2024-11-28 12:50:46.897682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.587 qpair failed and we were unable to recover it. 
00:27:04.587 [2024-11-28 12:50:46.897841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.588 [2024-11-28 12:50:46.897853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.588 qpair failed and we were unable to recover it. 00:27:04.588 [2024-11-28 12:50:46.898034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.588 [2024-11-28 12:50:46.898048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.588 qpair failed and we were unable to recover it. 00:27:04.588 [2024-11-28 12:50:46.898274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.588 [2024-11-28 12:50:46.898290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.588 qpair failed and we were unable to recover it. 00:27:04.588 [2024-11-28 12:50:46.898479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.588 [2024-11-28 12:50:46.898493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.588 qpair failed and we were unable to recover it. 00:27:04.588 [2024-11-28 12:50:46.898661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.588 [2024-11-28 12:50:46.898674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.588 qpair failed and we were unable to recover it. 
00:27:04.588 [2024-11-28 12:50:46.898830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.588 [2024-11-28 12:50:46.898844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.588 qpair failed and we were unable to recover it. 00:27:04.588 [2024-11-28 12:50:46.899065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.588 [2024-11-28 12:50:46.899081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.588 qpair failed and we were unable to recover it. 00:27:04.588 [2024-11-28 12:50:46.899316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.588 [2024-11-28 12:50:46.899329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.588 qpair failed and we were unable to recover it. 00:27:04.588 [2024-11-28 12:50:46.899494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.588 [2024-11-28 12:50:46.899506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.588 qpair failed and we were unable to recover it. 00:27:04.588 [2024-11-28 12:50:46.899645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.588 [2024-11-28 12:50:46.899659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.588 qpair failed and we were unable to recover it. 
00:27:04.588 [2024-11-28 12:50:46.899751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.588 [2024-11-28 12:50:46.899762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.588 qpair failed and we were unable to recover it. 
[The three-message sequence above (connect() failed with errno = 111, sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats 83 times between 12:50:46.899751 and 12:50:46.915916; the entries are identical except for their timestamps and are elided here.]
00:27:04.590 [2024-11-28 12:50:46.916135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.590 [2024-11-28 12:50:46.916160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.590 qpair failed and we were unable to recover it. 
[The same sequence then repeats 32 times for tqpair=0x7f8c64000b90 between 12:50:46.916135 and 12:50:46.922959, again differing only in timestamps.]
00:27:04.591 [2024-11-28 12:50:46.923197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.591 [2024-11-28 12:50:46.923214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.591 qpair failed and we were unable to recover it. 00:27:04.591 [2024-11-28 12:50:46.923400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.591 [2024-11-28 12:50:46.923416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.591 qpair failed and we were unable to recover it. 00:27:04.591 [2024-11-28 12:50:46.923670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.591 [2024-11-28 12:50:46.923687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.591 qpair failed and we were unable to recover it. 00:27:04.591 [2024-11-28 12:50:46.923922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.591 [2024-11-28 12:50:46.923939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.591 qpair failed and we were unable to recover it. 00:27:04.591 [2024-11-28 12:50:46.924091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.591 [2024-11-28 12:50:46.924108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.591 qpair failed and we were unable to recover it. 
00:27:04.591 [2024-11-28 12:50:46.924366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.591 [2024-11-28 12:50:46.924382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.591 qpair failed and we were unable to recover it. 00:27:04.591 [2024-11-28 12:50:46.924679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.591 [2024-11-28 12:50:46.924696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.591 qpair failed and we were unable to recover it. 00:27:04.591 [2024-11-28 12:50:46.924941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.591 [2024-11-28 12:50:46.924963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.591 qpair failed and we were unable to recover it. 00:27:04.591 [2024-11-28 12:50:46.925123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.591 [2024-11-28 12:50:46.925139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.591 qpair failed and we were unable to recover it. 00:27:04.591 [2024-11-28 12:50:46.925311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.591 [2024-11-28 12:50:46.925327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.591 qpair failed and we were unable to recover it. 
00:27:04.591 [2024-11-28 12:50:46.925478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.591 [2024-11-28 12:50:46.925494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.591 qpair failed and we were unable to recover it. 00:27:04.591 [2024-11-28 12:50:46.925722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.591 [2024-11-28 12:50:46.925738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.591 qpair failed and we were unable to recover it. 00:27:04.591 [2024-11-28 12:50:46.925895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.591 [2024-11-28 12:50:46.925912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.591 qpair failed and we were unable to recover it. 00:27:04.591 [2024-11-28 12:50:46.926126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.591 [2024-11-28 12:50:46.926143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.591 qpair failed and we were unable to recover it. 00:27:04.591 [2024-11-28 12:50:46.926284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.591 [2024-11-28 12:50:46.926300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.591 qpair failed and we were unable to recover it. 
00:27:04.592 [2024-11-28 12:50:46.926511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.926527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.926706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.926722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.926821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.926836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.926997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.927014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.927196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.927214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 
00:27:04.592 [2024-11-28 12:50:46.927382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.927398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.927562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.927578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.927656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.927669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.927819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.927834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.927986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.928002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 
00:27:04.592 [2024-11-28 12:50:46.928156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.928173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.928316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.928332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.928474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.928490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.928700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.928717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.928958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.928976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 
00:27:04.592 [2024-11-28 12:50:46.929234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.929250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.929396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.929412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.929658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.929675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.929910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.929925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.930108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.930125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 
00:27:04.592 [2024-11-28 12:50:46.930311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.930328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.930523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.930540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.930794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.930814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.931053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.931069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.931305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.931322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 
00:27:04.592 [2024-11-28 12:50:46.931559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.931575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.931714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.931730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.931959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.931975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.932189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.932205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.932379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.932396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 
00:27:04.592 [2024-11-28 12:50:46.932635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.932651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.932887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.932905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.933135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.933151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.933292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.933308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.933520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.933536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 
00:27:04.592 [2024-11-28 12:50:46.933757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.933774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.933873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.933893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.592 qpair failed and we were unable to recover it. 00:27:04.592 [2024-11-28 12:50:46.934146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.592 [2024-11-28 12:50:46.934164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 00:27:04.593 [2024-11-28 12:50:46.934333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.593 [2024-11-28 12:50:46.934349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 00:27:04.593 [2024-11-28 12:50:46.934559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.593 [2024-11-28 12:50:46.934577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 
00:27:04.593 [2024-11-28 12:50:46.934690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.593 [2024-11-28 12:50:46.934705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 00:27:04.593 [2024-11-28 12:50:46.934916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.593 [2024-11-28 12:50:46.934936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 00:27:04.593 [2024-11-28 12:50:46.935160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.593 [2024-11-28 12:50:46.935175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 00:27:04.593 [2024-11-28 12:50:46.935449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.593 [2024-11-28 12:50:46.935466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 00:27:04.593 [2024-11-28 12:50:46.935549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.593 [2024-11-28 12:50:46.935564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 
00:27:04.593 [2024-11-28 12:50:46.935789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.593 [2024-11-28 12:50:46.935808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 00:27:04.593 [2024-11-28 12:50:46.936069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.593 [2024-11-28 12:50:46.936089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 00:27:04.593 [2024-11-28 12:50:46.936326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.593 [2024-11-28 12:50:46.936343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 00:27:04.593 [2024-11-28 12:50:46.936551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.593 [2024-11-28 12:50:46.936567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 00:27:04.593 [2024-11-28 12:50:46.936798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.593 [2024-11-28 12:50:46.936818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 
00:27:04.593 [2024-11-28 12:50:46.936959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.593 [2024-11-28 12:50:46.936973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 00:27:04.593 [2024-11-28 12:50:46.937171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.593 [2024-11-28 12:50:46.937185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 00:27:04.593 [2024-11-28 12:50:46.937265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.593 [2024-11-28 12:50:46.937277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 00:27:04.593 [2024-11-28 12:50:46.937422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.593 [2024-11-28 12:50:46.937436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 00:27:04.593 [2024-11-28 12:50:46.937515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.593 [2024-11-28 12:50:46.937527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 
00:27:04.593 [2024-11-28 12:50:46.937747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.593 [2024-11-28 12:50:46.937761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 00:27:04.593 [2024-11-28 12:50:46.937993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.593 [2024-11-28 12:50:46.938007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 00:27:04.593 [2024-11-28 12:50:46.938265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.593 [2024-11-28 12:50:46.938278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 00:27:04.593 [2024-11-28 12:50:46.938381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.593 [2024-11-28 12:50:46.938394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 00:27:04.593 [2024-11-28 12:50:46.938550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.593 [2024-11-28 12:50:46.938562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 
00:27:04.593 [2024-11-28 12:50:46.938723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.593 [2024-11-28 12:50:46.938736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.593 qpair failed and we were unable to recover it. 
00:27:04.595 [2024-11-28 12:50:46.952233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.595 [2024-11-28 12:50:46.952259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.595 qpair failed and we were unable to recover it. 
00:27:04.596 [2024-11-28 12:50:46.961522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.596 [2024-11-28 12:50:46.961537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.596 qpair failed and we were unable to recover it. 00:27:04.596 [2024-11-28 12:50:46.961641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.596 [2024-11-28 12:50:46.961656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.596 qpair failed and we were unable to recover it. 00:27:04.596 [2024-11-28 12:50:46.961865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.596 [2024-11-28 12:50:46.961880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.596 qpair failed and we were unable to recover it. 00:27:04.596 [2024-11-28 12:50:46.962024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.596 [2024-11-28 12:50:46.962039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.596 qpair failed and we were unable to recover it. 00:27:04.596 [2024-11-28 12:50:46.962246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.596 [2024-11-28 12:50:46.962262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.596 qpair failed and we were unable to recover it. 
00:27:04.596 [2024-11-28 12:50:46.962483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.596 [2024-11-28 12:50:46.962499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.596 qpair failed and we were unable to recover it. 00:27:04.596 [2024-11-28 12:50:46.962611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.596 [2024-11-28 12:50:46.962625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.596 qpair failed and we were unable to recover it. 00:27:04.596 [2024-11-28 12:50:46.962779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.596 [2024-11-28 12:50:46.962793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.596 qpair failed and we were unable to recover it. 00:27:04.596 [2024-11-28 12:50:46.962956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.596 [2024-11-28 12:50:46.962974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.596 qpair failed and we were unable to recover it. 00:27:04.596 [2024-11-28 12:50:46.963222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.596 [2024-11-28 12:50:46.963237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.596 qpair failed and we were unable to recover it. 
00:27:04.596 [2024-11-28 12:50:46.963337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.596 [2024-11-28 12:50:46.963352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.963607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.963622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.963802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.963818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.963910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.963926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.964163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.964179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 
00:27:04.597 [2024-11-28 12:50:46.964338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.964355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.964499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.964515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.964677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.964693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.964846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.964862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.965083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.965100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 
00:27:04.597 [2024-11-28 12:50:46.965254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.965270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.965376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.965392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.965527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.965561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.965793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.965815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.966072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.966087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 
00:27:04.597 [2024-11-28 12:50:46.966293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.966307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.966511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.966525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.966697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.966710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.966871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.966885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.967104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.967118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 
00:27:04.597 [2024-11-28 12:50:46.967269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.967282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.967437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.967449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.967583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.967596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.967759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.967771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.967926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.967939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 
00:27:04.597 [2024-11-28 12:50:46.968133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.968149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.968388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.968402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.968558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.968571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.968654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.968665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.968905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.968917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 
00:27:04.597 [2024-11-28 12:50:46.969070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.969084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.969325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.969339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.969430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.969446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.969660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.969680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.969772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.969785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 
00:27:04.597 [2024-11-28 12:50:46.969886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.969899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.970149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.970164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.970343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.970355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.597 [2024-11-28 12:50:46.970520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.597 [2024-11-28 12:50:46.970533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.597 qpair failed and we were unable to recover it. 00:27:04.598 [2024-11-28 12:50:46.970615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.970627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 
00:27:04.598 [2024-11-28 12:50:46.970771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.970785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 00:27:04.598 [2024-11-28 12:50:46.970969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.970983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 00:27:04.598 [2024-11-28 12:50:46.971083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.971096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 00:27:04.598 [2024-11-28 12:50:46.971275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.971287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 00:27:04.598 [2024-11-28 12:50:46.971422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.971434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 
00:27:04.598 [2024-11-28 12:50:46.971576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.971589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 00:27:04.598 [2024-11-28 12:50:46.971760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.971773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 00:27:04.598 [2024-11-28 12:50:46.972001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.972014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 00:27:04.598 [2024-11-28 12:50:46.972159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.972172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 00:27:04.598 [2024-11-28 12:50:46.972275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.972288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 
00:27:04.598 [2024-11-28 12:50:46.972391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.972404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 00:27:04.598 [2024-11-28 12:50:46.972631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.972644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 00:27:04.598 [2024-11-28 12:50:46.972843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.972871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 00:27:04.598 [2024-11-28 12:50:46.973114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.973131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 00:27:04.598 [2024-11-28 12:50:46.973234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.973249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 
00:27:04.598 [2024-11-28 12:50:46.973422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.973454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 00:27:04.598 [2024-11-28 12:50:46.973739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.973771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 00:27:04.598 [2024-11-28 12:50:46.973960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.973996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 00:27:04.598 [2024-11-28 12:50:46.974259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.974292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 00:27:04.598 [2024-11-28 12:50:46.974496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.974512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 
00:27:04.598 [2024-11-28 12:50:46.974693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.974709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 00:27:04.598 [2024-11-28 12:50:46.974893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.974925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 00:27:04.598 [2024-11-28 12:50:46.975133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.975164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 00:27:04.598 [2024-11-28 12:50:46.975353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.975384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 00:27:04.598 [2024-11-28 12:50:46.975628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.598 [2024-11-28 12:50:46.975645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.598 qpair failed and we were unable to recover it. 
00:27:04.598 [2024-11-28 12:50:46.975801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.598 [2024-11-28 12:50:46.975821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.598 qpair failed and we were unable to recover it.
[the error pair above (posix.c:1054 connect() failed, errno = 111, i.e. connection refused, followed by nvme_tcp.c:2288 sock connection error to addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it.") repeats ~114 more times between 12:50:46.976 and 12:50:47.003, with tqpair handles 0x7f8c58000b90, 0x7f8c5c000b90, and 0x7f8c64000b90]
00:27:04.602 [2024-11-28 12:50:47.003452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.003467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.003630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.003647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.003908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.003940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.004079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.004095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.004277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.004294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 
00:27:04.602 [2024-11-28 12:50:47.004391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.004406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.004663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.004696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.004819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.004851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.005119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.005153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.005334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.005366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 
00:27:04.602 [2024-11-28 12:50:47.005533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.005552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.005788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.005820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.006099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.006134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.006324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.006340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.006521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.006553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 
00:27:04.602 [2024-11-28 12:50:47.006758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.006790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.006918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.006958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.007200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.007215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.007317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.007331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.007556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.007589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 
00:27:04.602 [2024-11-28 12:50:47.007728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.007762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.007964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.007998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.008258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.008274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.008457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.008488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.008739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.008772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 
00:27:04.602 [2024-11-28 12:50:47.008929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.008970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.009213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.009246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.009488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.009520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.009787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.009803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.010033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.010049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 
00:27:04.602 [2024-11-28 12:50:47.010152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.010167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.010377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.010393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.010554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.010573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.010807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.010823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 00:27:04.602 [2024-11-28 12:50:47.010903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.010917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.602 qpair failed and we were unable to recover it. 
00:27:04.602 [2024-11-28 12:50:47.011114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.602 [2024-11-28 12:50:47.011146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.011341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.011374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.011638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.011671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.011879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.011911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.012050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.012083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 
00:27:04.603 [2024-11-28 12:50:47.012295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.012328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.012557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.012589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.012853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.012885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.013155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.013189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.013454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.013486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 
00:27:04.603 [2024-11-28 12:50:47.013742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.013757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.013909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.013925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.014110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.014144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.014331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.014362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.014518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.014550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 
00:27:04.603 [2024-11-28 12:50:47.014836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.014852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.014960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.014975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.015117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.015133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.015225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.015241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.015435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.015468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 
00:27:04.603 [2024-11-28 12:50:47.015586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.015618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.015821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.015853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.016124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.016157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.016352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.016386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.016595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.016612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 
00:27:04.603 [2024-11-28 12:50:47.016872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.016888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.017065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.017081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.017259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.017292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.017488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.017521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.603 [2024-11-28 12:50:47.017765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.017799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 
00:27:04.603 [2024-11-28 12:50:47.018050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.603 [2024-11-28 12:50:47.018083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.603 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.018328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.018360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.018637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.018653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.018869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.018885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.019096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.019112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 
00:27:04.604 [2024-11-28 12:50:47.019368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.019409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.019557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.019590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.019836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.019874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.020175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.020209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.020474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.020509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 
00:27:04.604 [2024-11-28 12:50:47.020720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.020752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.020954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.020988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.021317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.021347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.021489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.021521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.021701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.021734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 
00:27:04.604 [2024-11-28 12:50:47.021979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.022013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.022233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.022265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.022402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.022434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.022573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.022607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.022879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.022912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 
00:27:04.604 [2024-11-28 12:50:47.023112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.023127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.023221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.023237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.023470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.023486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.023714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.023730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.023962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.023980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 
00:27:04.604 [2024-11-28 12:50:47.024103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.024119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.024291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.024323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.024470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.024502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.024793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.024827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.025110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.025143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 
00:27:04.604 [2024-11-28 12:50:47.025341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.025356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.025576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.025608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.025879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.025911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.026114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.026147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.026357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.026390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 
00:27:04.604 [2024-11-28 12:50:47.026528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.026544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.604 qpair failed and we were unable to recover it. 00:27:04.604 [2024-11-28 12:50:47.026747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.604 [2024-11-28 12:50:47.026780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.027099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.027134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.027332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.027365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.027577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.027609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 
00:27:04.605 [2024-11-28 12:50:47.027817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.027850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.028079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.028114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.028308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.028325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.028399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.028438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.028677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.028710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 
00:27:04.605 [2024-11-28 12:50:47.028946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.029005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.029146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.029178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.029362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.029402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.029757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.029790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.029997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.030030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 
00:27:04.605 [2024-11-28 12:50:47.030233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.030267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.030447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.030479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.030778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.030810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.031037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.031071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.031263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.031280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 
00:27:04.605 [2024-11-28 12:50:47.031459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.031491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.031689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.031722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.032004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.032038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.032251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.032283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.032567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.032599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 
00:27:04.605 [2024-11-28 12:50:47.032784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.032816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.033020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.033055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.033275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.033291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.033477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.033510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.033720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.033754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 
00:27:04.605 [2024-11-28 12:50:47.034026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.034059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.034261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.034293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.034499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.034532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.034710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.034742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.034967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.035001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 
00:27:04.605 [2024-11-28 12:50:47.035161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.035193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.035370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.035402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.035547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.035580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.605 [2024-11-28 12:50:47.035845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.605 [2024-11-28 12:50:47.035878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.605 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.036073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.036107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 
00:27:04.606 [2024-11-28 12:50:47.036325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.036357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.036581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.036597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.036740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.036757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.036856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.036894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.037115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.037148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 
00:27:04.606 [2024-11-28 12:50:47.037344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.037361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.037547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.037580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.037869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.037902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.038068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.038101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.038303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.038337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 
00:27:04.606 [2024-11-28 12:50:47.038535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.038567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.038824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.038840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.039073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.039093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.039306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.039323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.039490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.039507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 
00:27:04.606 [2024-11-28 12:50:47.039685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.039719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.039859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.039892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.040102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.040135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.040335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.040369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.040647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.040678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 
00:27:04.606 [2024-11-28 12:50:47.041045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.041080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.041308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.041325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.041493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.041524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.041729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.041761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.041982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.042016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 
00:27:04.606 [2024-11-28 12:50:47.042208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.042239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.042392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.042426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.042655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.042689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.042971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.043004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.043161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.043194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 
00:27:04.606 [2024-11-28 12:50:47.043391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.043407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.043521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.043553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.043839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.043871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.044078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.044110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 00:27:04.606 [2024-11-28 12:50:47.044242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.606 [2024-11-28 12:50:47.044258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.606 qpair failed and we were unable to recover it. 
00:27:04.607 [2024-11-28 12:50:47.053766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.607 [2024-11-28 12:50:47.053782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:04.607 qpair failed and we were unable to recover it.
00:27:04.607 [2024-11-28 12:50:47.053940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.607 [2024-11-28 12:50:47.053963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:04.607 qpair failed and we were unable to recover it.
00:27:04.607 [2024-11-28 12:50:47.054069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.607 [2024-11-28 12:50:47.054086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:04.607 qpair failed and we were unable to recover it.
00:27:04.607 [2024-11-28 12:50:47.054197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.607 [2024-11-28 12:50:47.054214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:04.607 qpair failed and we were unable to recover it.
00:27:04.607 [2024-11-28 12:50:47.054399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.608 [2024-11-28 12:50:47.054439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.608 qpair failed and we were unable to recover it.
00:27:04.892 [2024-11-28 12:50:47.067107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.892 [2024-11-28 12:50:47.067122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.892 qpair failed and we were unable to recover it. 00:27:04.892 [2024-11-28 12:50:47.067339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.892 [2024-11-28 12:50:47.067372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.892 qpair failed and we were unable to recover it. 00:27:04.892 [2024-11-28 12:50:47.067661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.892 [2024-11-28 12:50:47.067733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.892 qpair failed and we were unable to recover it. 00:27:04.892 [2024-11-28 12:50:47.068961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.892 [2024-11-28 12:50:47.068993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.892 qpair failed and we were unable to recover it. 00:27:04.892 [2024-11-28 12:50:47.069277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.892 [2024-11-28 12:50:47.069312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.892 qpair failed and we were unable to recover it. 
00:27:04.893 [2024-11-28 12:50:47.069597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.069631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.069922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.069973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.070179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.070212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.070466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.070499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.070707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.070722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 
00:27:04.893 [2024-11-28 12:50:47.070938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.070980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.071235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.071267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.071472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.071506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.071704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.071738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.071875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.071908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 
00:27:04.893 [2024-11-28 12:50:47.072151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.072194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.072460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.072477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.072650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.072666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.072832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.072866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.076972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.077013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 
00:27:04.893 [2024-11-28 12:50:47.077237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.077252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.077385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.077400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.077513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.077529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.077648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.077663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.077899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.077917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 
00:27:04.893 [2024-11-28 12:50:47.078107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.078129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.078303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.078320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.078455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.078471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.078672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.078688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.078877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.078893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 
00:27:04.893 [2024-11-28 12:50:47.079053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.079069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.079188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.079206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.079281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.079296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.079397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.079413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.079652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.079669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 
00:27:04.893 [2024-11-28 12:50:47.079776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.079793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.080005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.080025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.080209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.080225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.080380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.080396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.080552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.080569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 
00:27:04.893 [2024-11-28 12:50:47.080711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.080727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.080819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.080835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.081005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-11-28 12:50:47.081021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-11-28 12:50:47.081125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.081142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.081297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.081313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 
00:27:04.894 [2024-11-28 12:50:47.081410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.081425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.081585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.081602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.081859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.081876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.082084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.082101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.082266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.082282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 
00:27:04.894 [2024-11-28 12:50:47.082526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.082559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.082753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.082786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.083015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.083049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.083249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.083281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.083571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.083587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 
00:27:04.894 [2024-11-28 12:50:47.083788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.083807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.083915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.083932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.084152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.084186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.084387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.084420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.084572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.084606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 
00:27:04.894 [2024-11-28 12:50:47.084885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.084926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.085077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.085110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.085257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.085290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.085490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.085522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.085747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.085781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 
00:27:04.894 [2024-11-28 12:50:47.086011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.086047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.086201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.086235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.086432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.086464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.086842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.086875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.087030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.087065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 
00:27:04.894 [2024-11-28 12:50:47.087338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.087372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.087517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.087550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.087685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.087718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.088009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.088043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-11-28 12:50:47.088243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-11-28 12:50:47.088277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 
00:27:04.894 [2024-11-28 12:50:47.088410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.894 [2024-11-28 12:50:47.088443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:04.894 qpair failed and we were unable to recover it.
[The same three-line error sequence — posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it" — repeats verbatim from 12:50:47.088 through 12:50:47.113; repeats elided.]
00:27:04.898 [2024-11-28 12:50:47.113978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.113995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.114143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.114161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.114382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.114399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.114682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.114698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.114963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.114981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 
00:27:04.898 [2024-11-28 12:50:47.115150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.115166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.115331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.115347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.115499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.115515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.115687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.115703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.115852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.115872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 
00:27:04.898 [2024-11-28 12:50:47.116036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.116053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.116163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.116178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.116342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.116363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.116476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.116492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.116595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.116615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 
00:27:04.898 [2024-11-28 12:50:47.116780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.116796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.116959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.116977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.117126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.117142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.117263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.117279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.117451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.117466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 
00:27:04.898 [2024-11-28 12:50:47.117779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.117796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.117887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.117903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.118080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.118098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.118204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.118219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.118378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.118395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 
00:27:04.898 [2024-11-28 12:50:47.118490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.118505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.118747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.118765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.118993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.119011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.119180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.119196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.119301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.119316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 
00:27:04.898 [2024-11-28 12:50:47.119404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.119419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.119689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.119707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.119869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.119886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.120035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.120053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.120161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.120177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 
00:27:04.898 [2024-11-28 12:50:47.120292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.120307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.898 qpair failed and we were unable to recover it. 00:27:04.898 [2024-11-28 12:50:47.120421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.898 [2024-11-28 12:50:47.120438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.120686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.120705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.120800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.120815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.120980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.120998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 
00:27:04.899 [2024-11-28 12:50:47.121109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.121128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.121277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.121293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.121462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.121477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.121694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.121711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.121856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.121872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 
00:27:04.899 [2024-11-28 12:50:47.122111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.122130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.122285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.122302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.122403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.122418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.122501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.122516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.122714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.122731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 
00:27:04.899 [2024-11-28 12:50:47.122943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.122981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.123146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.123163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.123416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.123436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.123597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.123613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.123773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.123789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 
00:27:04.899 [2024-11-28 12:50:47.123871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.123885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.124045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.124062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.124219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.124236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.124384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.124400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.124497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.124514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 
00:27:04.899 [2024-11-28 12:50:47.124600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.124616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.124714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.124730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.124815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.124830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.124979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.124995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.125142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.125158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 
00:27:04.899 [2024-11-28 12:50:47.125265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.125282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.125385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.125400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.125545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.125561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.125665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.125682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.125783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.125799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 
00:27:04.899 [2024-11-28 12:50:47.125997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.126014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.126109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.126125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.126210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.126226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.126397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.126414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.899 qpair failed and we were unable to recover it. 00:27:04.899 [2024-11-28 12:50:47.126507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.899 [2024-11-28 12:50:47.126524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:04.900 qpair failed and we were unable to recover it. 
00:27:04.900 [2024-11-28 12:50:47.126610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.900 [2024-11-28 12:50:47.126624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:04.900 qpair failed and we were unable to recover it.
00:27:04.902 [2024-11-28 12:50:47.137172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.902 [2024-11-28 12:50:47.137201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.902 qpair failed and we were unable to recover it.
00:27:04.903 [2024-11-28 12:50:47.142173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.142186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.142251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.142263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.142348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.142360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.142444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.142459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.142555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.142567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 
00:27:04.903 [2024-11-28 12:50:47.142636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.142648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.142801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.142813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.142897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.142908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.142976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.142988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.143081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.143093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 
00:27:04.903 [2024-11-28 12:50:47.143182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.143194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.143256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.143268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.143336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.143348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.143423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.143434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.143637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.143649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 
00:27:04.903 [2024-11-28 12:50:47.143726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.143737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.143872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.143884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.143962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.143976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.144067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.144080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.144149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.144161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 
00:27:04.903 [2024-11-28 12:50:47.144232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.144244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.144322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.144335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.144406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.144421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.144488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.144501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.144565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.144578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 
00:27:04.903 [2024-11-28 12:50:47.144670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.144682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.144768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.144779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.144876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.144889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.144971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.144983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.145050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.145063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 
00:27:04.903 [2024-11-28 12:50:47.145144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.145155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.145237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.903 [2024-11-28 12:50:47.145249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.903 qpair failed and we were unable to recover it. 00:27:04.903 [2024-11-28 12:50:47.145343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.145355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.145562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.145574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.145647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.145660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 
00:27:04.904 [2024-11-28 12:50:47.145736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.145749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.145825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.145838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.145904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.145914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.145989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.146002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.146090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.146102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 
00:27:04.904 [2024-11-28 12:50:47.146181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.146195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.146270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.146282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.146350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.146362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.146437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.146448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.146580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.146594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 
00:27:04.904 [2024-11-28 12:50:47.146667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.146679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.146821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.146834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.146940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.146965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.147061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.147074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.147143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.147154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 
00:27:04.904 [2024-11-28 12:50:47.147247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.147259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.147397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.147409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.147501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.147513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.147588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.147599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.147678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.147692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 
00:27:04.904 [2024-11-28 12:50:47.147844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.147856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.147930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.147942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.148023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.148035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.148105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.148118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.148197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.148209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 
00:27:04.904 [2024-11-28 12:50:47.148411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.148423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.148511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.148524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.148604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.148618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.148700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.148711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.148791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.148803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 
00:27:04.904 [2024-11-28 12:50:47.148869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.148880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.148955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.148968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.149041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.149053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.904 qpair failed and we were unable to recover it. 00:27:04.904 [2024-11-28 12:50:47.149131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.904 [2024-11-28 12:50:47.149143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.905 qpair failed and we were unable to recover it. 00:27:04.905 [2024-11-28 12:50:47.149230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.905 [2024-11-28 12:50:47.149242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.905 qpair failed and we were unable to recover it. 
00:27:04.905 [2024-11-28 12:50:47.149309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.905 [2024-11-28 12:50:47.149320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.905 qpair failed and we were unable to recover it. 00:27:04.905 [2024-11-28 12:50:47.149386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.905 [2024-11-28 12:50:47.149397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.905 qpair failed and we were unable to recover it. 00:27:04.905 [2024-11-28 12:50:47.149528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.905 [2024-11-28 12:50:47.149542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.905 qpair failed and we were unable to recover it. 00:27:04.905 [2024-11-28 12:50:47.149613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.905 [2024-11-28 12:50:47.149626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.905 qpair failed and we were unable to recover it. 00:27:04.905 [2024-11-28 12:50:47.149709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.905 [2024-11-28 12:50:47.149721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.905 qpair failed and we were unable to recover it. 
00:27:04.905 [2024-11-28 12:50:47.149800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.905 [2024-11-28 12:50:47.149812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.905 qpair failed and we were unable to recover it.
00:27:04.905 [2024-11-28 12:50:47.151439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.905 [2024-11-28 12:50:47.151476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.905 qpair failed and we were unable to recover it.
[... output trimmed: the same three-record failure sequence repeats, with only timestamps varying, for tqpair=0x7f8c5c000b90 and tqpair=0xd02be0 (addr=10.0.0.2, port=4420) from [2024-11-28 12:50:47.149800] through [2024-11-28 12:50:47.162316] ...]
00:27:04.908 [2024-11-28 12:50:47.162467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.162480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 00:27:04.908 [2024-11-28 12:50:47.162546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.162558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 00:27:04.908 [2024-11-28 12:50:47.162622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.162634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 00:27:04.908 [2024-11-28 12:50:47.162705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.162717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 00:27:04.908 [2024-11-28 12:50:47.162795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.162807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 
00:27:04.908 [2024-11-28 12:50:47.162878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.162890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 00:27:04.908 [2024-11-28 12:50:47.163080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.163093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 00:27:04.908 [2024-11-28 12:50:47.163169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.163180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 00:27:04.908 [2024-11-28 12:50:47.163267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.163281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 00:27:04.908 [2024-11-28 12:50:47.163359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.163372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 
00:27:04.908 [2024-11-28 12:50:47.163441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.163453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 00:27:04.908 [2024-11-28 12:50:47.163549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.163562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 00:27:04.908 [2024-11-28 12:50:47.163641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.163654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 00:27:04.908 [2024-11-28 12:50:47.163816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.163828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 00:27:04.908 [2024-11-28 12:50:47.163898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.163911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 
00:27:04.908 [2024-11-28 12:50:47.164054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.164067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 00:27:04.908 [2024-11-28 12:50:47.164236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.164249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 00:27:04.908 [2024-11-28 12:50:47.164318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.164330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 00:27:04.908 [2024-11-28 12:50:47.164403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.164416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 00:27:04.908 [2024-11-28 12:50:47.164552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.164566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 
00:27:04.908 [2024-11-28 12:50:47.164642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.164654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 00:27:04.908 [2024-11-28 12:50:47.164810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.164822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 00:27:04.908 [2024-11-28 12:50:47.164897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.164909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 00:27:04.908 [2024-11-28 12:50:47.165010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.165023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 00:27:04.908 [2024-11-28 12:50:47.165093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.165106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 
00:27:04.908 [2024-11-28 12:50:47.165182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.908 [2024-11-28 12:50:47.165194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.908 qpair failed and we were unable to recover it. 00:27:04.908 [2024-11-28 12:50:47.165263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.909 [2024-11-28 12:50:47.165275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.909 qpair failed and we were unable to recover it. 00:27:04.909 [2024-11-28 12:50:47.165340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.909 [2024-11-28 12:50:47.165351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.909 qpair failed and we were unable to recover it. 00:27:04.909 [2024-11-28 12:50:47.165521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.909 [2024-11-28 12:50:47.165533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.909 qpair failed and we were unable to recover it. 00:27:04.909 [2024-11-28 12:50:47.165610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.909 [2024-11-28 12:50:47.165622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.909 qpair failed and we were unable to recover it. 
00:27:04.909 [2024-11-28 12:50:47.165773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.909 [2024-11-28 12:50:47.165786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.909 qpair failed and we were unable to recover it. 00:27:04.909 [2024-11-28 12:50:47.165872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.909 [2024-11-28 12:50:47.165885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.909 qpair failed and we were unable to recover it. 00:27:04.909 [2024-11-28 12:50:47.165961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.909 [2024-11-28 12:50:47.165974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.909 qpair failed and we were unable to recover it. 00:27:04.909 [2024-11-28 12:50:47.166130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.909 [2024-11-28 12:50:47.166142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.909 qpair failed and we were unable to recover it. 00:27:04.909 [2024-11-28 12:50:47.166220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.909 [2024-11-28 12:50:47.166232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.909 qpair failed and we were unable to recover it. 
00:27:04.909 [2024-11-28 12:50:47.166298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.909 [2024-11-28 12:50:47.166309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.909 qpair failed and we were unable to recover it. 00:27:04.909 [2024-11-28 12:50:47.166462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.909 [2024-11-28 12:50:47.166474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.909 qpair failed and we were unable to recover it. 00:27:04.909 [2024-11-28 12:50:47.166556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.909 [2024-11-28 12:50:47.166567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.909 qpair failed and we were unable to recover it. 00:27:04.909 [2024-11-28 12:50:47.166644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.909 [2024-11-28 12:50:47.166656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.909 qpair failed and we were unable to recover it. 00:27:04.909 [2024-11-28 12:50:47.166731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.909 [2024-11-28 12:50:47.166743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.909 qpair failed and we were unable to recover it. 
00:27:04.909 [2024-11-28 12:50:47.166814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.909 [2024-11-28 12:50:47.166825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.909 qpair failed and we were unable to recover it. 00:27:04.909 [2024-11-28 12:50:47.166954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.909 [2024-11-28 12:50:47.166966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.909 qpair failed and we were unable to recover it. 00:27:04.909 [2024-11-28 12:50:47.167039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.909 [2024-11-28 12:50:47.167051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.909 qpair failed and we were unable to recover it. 00:27:04.909 [2024-11-28 12:50:47.167216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.909 [2024-11-28 12:50:47.167228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.909 qpair failed and we were unable to recover it. 00:27:04.909 [2024-11-28 12:50:47.167304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.909 [2024-11-28 12:50:47.167315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.909 qpair failed and we were unable to recover it. 
00:27:04.909 [2024-11-28 12:50:47.167625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.909 [2024-11-28 12:50:47.167648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.909 qpair failed and we were unable to recover it. 
00:27:04.913 [2024-11-28 12:50:47.184304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.913 [2024-11-28 12:50:47.184315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.913 qpair failed and we were unable to recover it. 00:27:04.913 [2024-11-28 12:50:47.184385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.913 [2024-11-28 12:50:47.184397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.913 qpair failed and we were unable to recover it. 00:27:04.913 [2024-11-28 12:50:47.184478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.913 [2024-11-28 12:50:47.184491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.913 qpair failed and we were unable to recover it. 00:27:04.913 [2024-11-28 12:50:47.184558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.913 [2024-11-28 12:50:47.184569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.913 qpair failed and we were unable to recover it. 00:27:04.913 [2024-11-28 12:50:47.184642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.913 [2024-11-28 12:50:47.184653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.913 qpair failed and we were unable to recover it. 
00:27:04.913 [2024-11-28 12:50:47.184801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.913 [2024-11-28 12:50:47.184813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.913 qpair failed and we were unable to recover it. 00:27:04.913 [2024-11-28 12:50:47.184946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.913 [2024-11-28 12:50:47.184966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.913 qpair failed and we were unable to recover it. 00:27:04.913 [2024-11-28 12:50:47.185047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.913 [2024-11-28 12:50:47.185058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.913 qpair failed and we were unable to recover it. 00:27:04.913 [2024-11-28 12:50:47.185141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.913 [2024-11-28 12:50:47.185152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.913 qpair failed and we were unable to recover it. 00:27:04.913 [2024-11-28 12:50:47.185217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.913 [2024-11-28 12:50:47.185229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.913 qpair failed and we were unable to recover it. 
00:27:04.913 [2024-11-28 12:50:47.185363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.913 [2024-11-28 12:50:47.185374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.913 qpair failed and we were unable to recover it. 00:27:04.913 [2024-11-28 12:50:47.185442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.913 [2024-11-28 12:50:47.185453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.913 qpair failed and we were unable to recover it. 00:27:04.913 [2024-11-28 12:50:47.185531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.913 [2024-11-28 12:50:47.185542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.913 qpair failed and we were unable to recover it. 00:27:04.913 [2024-11-28 12:50:47.185615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.913 [2024-11-28 12:50:47.185626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.913 qpair failed and we were unable to recover it. 00:27:04.913 [2024-11-28 12:50:47.185698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.913 [2024-11-28 12:50:47.185709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.913 qpair failed and we were unable to recover it. 
00:27:04.913 [2024-11-28 12:50:47.185788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.913 [2024-11-28 12:50:47.185800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.913 qpair failed and we were unable to recover it. 00:27:04.913 [2024-11-28 12:50:47.185870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.913 [2024-11-28 12:50:47.185882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.913 qpair failed and we were unable to recover it. 00:27:04.913 [2024-11-28 12:50:47.185982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.913 [2024-11-28 12:50:47.185995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.913 qpair failed and we were unable to recover it. 00:27:04.913 [2024-11-28 12:50:47.186068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.913 [2024-11-28 12:50:47.186081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.913 qpair failed and we were unable to recover it. 00:27:04.913 [2024-11-28 12:50:47.186159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.913 [2024-11-28 12:50:47.186171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.913 qpair failed and we were unable to recover it. 
00:27:04.913 [2024-11-28 12:50:47.186259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.913 [2024-11-28 12:50:47.186271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.913 qpair failed and we were unable to recover it. 00:27:04.913 [2024-11-28 12:50:47.186353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.913 [2024-11-28 12:50:47.186364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.913 qpair failed and we were unable to recover it. 00:27:04.913 [2024-11-28 12:50:47.186432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.913 [2024-11-28 12:50:47.186444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.913 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.186504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.186515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.186595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.186606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 
00:27:04.914 [2024-11-28 12:50:47.186686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.186697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.186763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.186775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.186849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.186860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.186998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.187011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.187089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.187100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 
00:27:04.914 [2024-11-28 12:50:47.187169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.187183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.187321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.187333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.187430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.187442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.187576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.187588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.187659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.187671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 
00:27:04.914 [2024-11-28 12:50:47.187741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.187753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.187825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.187837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.187905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.187917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.187988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.188000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.188080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.188092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 
00:27:04.914 [2024-11-28 12:50:47.188160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.188172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.188303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.188315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.188391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.188404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.188487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.188500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.188567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.188579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 
00:27:04.914 [2024-11-28 12:50:47.188667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.188678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.188746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.188758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.188893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.188905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.189064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.189078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.189162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.189174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 
00:27:04.914 [2024-11-28 12:50:47.189254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.189268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.189353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.189365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.189433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.189445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.189528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.189540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.189619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.189632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 
00:27:04.914 [2024-11-28 12:50:47.189701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.914 [2024-11-28 12:50:47.189713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.914 qpair failed and we were unable to recover it. 00:27:04.914 [2024-11-28 12:50:47.189780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.189793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.189877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.189889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.189961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.189972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.190128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.190141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 
00:27:04.915 [2024-11-28 12:50:47.190200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.190211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.190287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.190299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.190435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.190447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.190525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.190537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.190609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.190622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 
00:27:04.915 [2024-11-28 12:50:47.190702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.190715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.190859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.190871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.190935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.190951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.191022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.191033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.191203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.191215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 
00:27:04.915 [2024-11-28 12:50:47.191283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.191298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.191504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.191517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.191601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.191614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.191692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.191704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.191769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.191782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 
00:27:04.915 [2024-11-28 12:50:47.191853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.191864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.191944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.191960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.192030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.192043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.192251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.192262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.192319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.192331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 
00:27:04.915 [2024-11-28 12:50:47.192413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.192424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.192525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.192538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.192619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.192631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.192764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.192776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.192847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.192860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 
00:27:04.915 [2024-11-28 12:50:47.193018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.193031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.193109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.193122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.193268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.193280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.193363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.193375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.193445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.193457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 
00:27:04.915 [2024-11-28 12:50:47.193614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.193628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.193789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.193800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.193872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.193884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.193969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.915 [2024-11-28 12:50:47.193982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.915 qpair failed and we were unable to recover it. 00:27:04.915 [2024-11-28 12:50:47.194130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.194144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 
00:27:04.916 [2024-11-28 12:50:47.194241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.194253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.194331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.194344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.194413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.194425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.194510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.194522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.194627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.194639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 
00:27:04.916 [2024-11-28 12:50:47.194754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.194765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.194820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.194832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.194900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.194912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.194993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.195006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.195079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.195100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 
00:27:04.916 [2024-11-28 12:50:47.195182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.195194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.195260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.195271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.195343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.195356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.195424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.195435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.195508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.195520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 
00:27:04.916 [2024-11-28 12:50:47.195589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.195603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.195678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.195689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.195823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.195835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.195902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.195914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.196087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.196098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 
00:27:04.916 [2024-11-28 12:50:47.196172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.196184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.196266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.196278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.196345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.196356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.196467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.196479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.196562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.196576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 
00:27:04.916 [2024-11-28 12:50:47.196658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.196671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.196738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.196750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.196830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.196841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.196903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.196915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.197007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.197020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 
00:27:04.916 [2024-11-28 12:50:47.197085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.197098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.197303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.197316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.197381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.197392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.197461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.197473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.197549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.197561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 
00:27:04.916 [2024-11-28 12:50:47.197637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.197649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.197720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.197732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.916 qpair failed and we were unable to recover it. 00:27:04.916 [2024-11-28 12:50:47.197815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.916 [2024-11-28 12:50:47.197827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.197977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.197989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.198062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.198075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 
00:27:04.917 [2024-11-28 12:50:47.198161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.198172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.198346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.198359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.198495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.198507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.198590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.198602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.198684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.198696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 
00:27:04.917 [2024-11-28 12:50:47.198770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.198785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.198862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.198874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.199012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.199024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.199115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.199127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.199199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.199210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 
00:27:04.917 [2024-11-28 12:50:47.199277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.199289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.199420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.199434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.199538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.199549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.199625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.199637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.199771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.199783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 
00:27:04.917 [2024-11-28 12:50:47.199847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.199862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.199939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.199956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.200039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.200051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.200136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.200148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.200217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.200229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 
00:27:04.917 [2024-11-28 12:50:47.200316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.200327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.200407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.200419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.200499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.200511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.200726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.200737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.200818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.200829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 
00:27:04.917 [2024-11-28 12:50:47.200907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.200918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.200999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.201011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.201075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.201086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.201271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.201284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.201420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.201431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 
00:27:04.917 [2024-11-28 12:50:47.201502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.201514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.201662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.201675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.201774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.201786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.201920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.201932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.917 [2024-11-28 12:50:47.202011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.202023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 
00:27:04.917 [2024-11-28 12:50:47.202181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.917 [2024-11-28 12:50:47.202193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.917 qpair failed and we were unable to recover it. 00:27:04.918 [2024-11-28 12:50:47.202336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.918 [2024-11-28 12:50:47.202348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.918 qpair failed and we were unable to recover it. 00:27:04.918 [2024-11-28 12:50:47.202431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.918 [2024-11-28 12:50:47.202443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.918 qpair failed and we were unable to recover it. 00:27:04.918 [2024-11-28 12:50:47.202513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.918 [2024-11-28 12:50:47.202524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.918 qpair failed and we were unable to recover it. 00:27:04.918 [2024-11-28 12:50:47.202605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.918 [2024-11-28 12:50:47.202616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.918 qpair failed and we were unable to recover it. 
00:27:04.918 [2024-11-28 12:50:47.202683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.918 [2024-11-28 12:50:47.202695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.918 qpair failed and we were unable to recover it. 00:27:04.918 [2024-11-28 12:50:47.202763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.918 [2024-11-28 12:50:47.202776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.918 qpair failed and we were unable to recover it. 00:27:04.918 [2024-11-28 12:50:47.202996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.918 [2024-11-28 12:50:47.203021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.918 qpair failed and we were unable to recover it. 00:27:04.918 [2024-11-28 12:50:47.203170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.918 [2024-11-28 12:50:47.203186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.918 qpair failed and we were unable to recover it. 00:27:04.918 [2024-11-28 12:50:47.203263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.918 [2024-11-28 12:50:47.203278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.918 qpair failed and we were unable to recover it. 
00:27:04.918 [2024-11-28 12:50:47.203380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.203396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.203479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.203494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.203704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.203720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.203934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.203955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.204167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.204184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.204359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.204375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.204461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.204476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.204569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.204584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.204678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.204693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.204778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.204794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.204957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.204974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.205132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.205149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.205230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.205246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.205454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.205469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.205608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.205623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.205724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.205741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.205895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.205911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.206066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.206082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.206169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.206186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.206285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.206301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.206377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.206393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.206533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.206548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.206641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.206657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.206732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.206748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.206958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.206977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.207117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.207133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.207217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.207233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.207327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.207344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.207436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.207452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.918 qpair failed and we were unable to recover it.
00:27:04.918 [2024-11-28 12:50:47.207611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.918 [2024-11-28 12:50:47.207627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.207786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.207802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.207912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.207928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.208010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.208026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.208123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.208139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.208300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.208316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.208404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.208419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.208582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.208598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.208745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.208760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.208971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.208987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.209190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.209207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.209358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.209374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.209529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.209545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.209639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.209656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.209750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.209766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.209915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.209931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.210049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.210068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.210250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.210262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.210467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.210479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.210578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.210611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.210793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.210824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.210966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.211000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.211246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.211260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.211464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.211476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.211650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.211662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.211810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.211843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.211968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.212001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.212142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.212175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.212300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.212333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.212450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.212488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.212712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.212723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.212870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.212881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.213047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.213060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.919 [2024-11-28 12:50:47.213220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.919 [2024-11-28 12:50:47.213253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.919 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.213366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.213398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.213525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.213557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.213746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.213779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.213993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.214005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.214202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.214214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.214288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.214300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.214458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.214490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.214612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.214645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.214838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.214872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.215007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.215019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.215179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.215191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.215320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.215331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.215403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.215414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.215494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.215506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.215593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.215605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.215759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.215795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.215916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.215956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.216094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.216124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.216249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.216280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.216468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.216501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.216611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.216643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.216771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.216804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.216939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.216982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.217092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.217123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.217302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.217334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.217450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.217481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.217673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.217703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.217831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.217863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.218050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.218082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.218203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.218235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.218421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.218453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.218575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.920 [2024-11-28 12:50:47.218607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.920 qpair failed and we were unable to recover it.
00:27:04.920 [2024-11-28 12:50:47.218693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.920 [2024-11-28 12:50:47.218710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.920 qpair failed and we were unable to recover it. 00:27:04.920 [2024-11-28 12:50:47.218871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.920 [2024-11-28 12:50:47.218907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.920 qpair failed and we were unable to recover it. 00:27:04.920 [2024-11-28 12:50:47.219104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.920 [2024-11-28 12:50:47.219138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.920 qpair failed and we were unable to recover it. 00:27:04.920 [2024-11-28 12:50:47.219323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.920 [2024-11-28 12:50:47.219353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.920 qpair failed and we were unable to recover it. 00:27:04.920 [2024-11-28 12:50:47.219452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.920 [2024-11-28 12:50:47.219484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.920 qpair failed and we were unable to recover it. 
00:27:04.920 [2024-11-28 12:50:47.219664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.219696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.219827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.219859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.220054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.220086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.220337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.220370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.220550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.220582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 
00:27:04.921 [2024-11-28 12:50:47.220706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.220751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.220911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.220927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.221045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.221077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.221217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.221249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.221370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.221401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 
00:27:04.921 [2024-11-28 12:50:47.221523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.221554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.221774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.221789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.221894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.221926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.222051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.222084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.222196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.222229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 
00:27:04.921 [2024-11-28 12:50:47.222351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.222382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.222496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.222527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.222652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.222695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.222797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.222812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.222986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.223020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 
00:27:04.921 [2024-11-28 12:50:47.223147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.223181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.223387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.223419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.223552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.223584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.223721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.223754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.223885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.223901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 
00:27:04.921 [2024-11-28 12:50:47.223977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.223993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.224073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.224089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.224168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.224198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.224358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.224391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.224523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.224555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 
00:27:04.921 [2024-11-28 12:50:47.224668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.224700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.224938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.224959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.225055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.225092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.225300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.225333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.225450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.225482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 
00:27:04.921 [2024-11-28 12:50:47.225661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.225693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.225841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.225872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.226013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.921 [2024-11-28 12:50:47.226029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.921 qpair failed and we were unable to recover it. 00:27:04.921 [2024-11-28 12:50:47.226122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.226138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.226310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.226342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 
00:27:04.922 [2024-11-28 12:50:47.226461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.226492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.226624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.226656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.226834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.226849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.226961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.226977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.227054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.227070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 
00:27:04.922 [2024-11-28 12:50:47.227162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.227179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.227371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.227407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.227637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.227651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.227735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.227748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.227831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.227843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 
00:27:04.922 [2024-11-28 12:50:47.227998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.228031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.228152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.228184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.228294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.228325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.228476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.228509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.228641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.228672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 
00:27:04.922 [2024-11-28 12:50:47.228798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.228830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.229025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.229064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.229205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.229217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.229282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.229294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.229374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.229387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 
00:27:04.922 [2024-11-28 12:50:47.229530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.229542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.229681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.229714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.229847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.229880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.230093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.230120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.230195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.230208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 
00:27:04.922 [2024-11-28 12:50:47.230276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.230287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.230349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.230362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.230439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.230451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.230543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.230555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.230703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.230715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 
00:27:04.922 [2024-11-28 12:50:47.230779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.230791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.230871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.230883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.231017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.231029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.231103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.231116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.231197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.231208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 
00:27:04.922 [2024-11-28 12:50:47.231277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.231288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.922 [2024-11-28 12:50:47.231441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.922 [2024-11-28 12:50:47.231473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.922 qpair failed and we were unable to recover it. 00:27:04.923 [2024-11-28 12:50:47.231582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.923 [2024-11-28 12:50:47.231614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.923 qpair failed and we were unable to recover it. 00:27:04.923 [2024-11-28 12:50:47.231793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.923 [2024-11-28 12:50:47.231825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.923 qpair failed and we were unable to recover it. 00:27:04.923 [2024-11-28 12:50:47.232015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.923 [2024-11-28 12:50:47.232048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.923 qpair failed and we were unable to recover it. 
00:27:04.923 [2024-11-28 12:50:47.232186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.923 [2024-11-28 12:50:47.232219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.923 qpair failed and we were unable to recover it. 00:27:04.923 [2024-11-28 12:50:47.232401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.923 [2024-11-28 12:50:47.232434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.923 qpair failed and we were unable to recover it. 00:27:04.923 [2024-11-28 12:50:47.232577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.923 [2024-11-28 12:50:47.232610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.923 qpair failed and we were unable to recover it. 00:27:04.923 [2024-11-28 12:50:47.232802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.923 [2024-11-28 12:50:47.232834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.923 qpair failed and we were unable to recover it. 00:27:04.923 [2024-11-28 12:50:47.232968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.923 [2024-11-28 12:50:47.233002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.923 qpair failed and we were unable to recover it. 
00:27:04.923 [2024-11-28 12:50:47.233203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.923 [2024-11-28 12:50:47.233236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.923 qpair failed and we were unable to recover it. 00:27:04.923 [2024-11-28 12:50:47.233490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.923 [2024-11-28 12:50:47.233531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.923 qpair failed and we were unable to recover it. 00:27:04.923 [2024-11-28 12:50:47.233719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.923 [2024-11-28 12:50:47.233751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.923 qpair failed and we were unable to recover it. 00:27:04.923 [2024-11-28 12:50:47.233971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.923 [2024-11-28 12:50:47.234006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.923 qpair failed and we were unable to recover it. 00:27:04.923 [2024-11-28 12:50:47.234192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.923 [2024-11-28 12:50:47.234207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.923 qpair failed and we were unable to recover it. 
00:27:04.923 [2024-11-28 12:50:47.234442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.923 [2024-11-28 12:50:47.234458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.923 qpair failed and we were unable to recover it.
00:27:04.923 [2024-11-28 12:50:47.234547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.923 [2024-11-28 12:50:47.234563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.923 qpair failed and we were unable to recover it.
00:27:04.923 [2024-11-28 12:50:47.234638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.923 [2024-11-28 12:50:47.234653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.923 qpair failed and we were unable to recover it.
00:27:04.923 [2024-11-28 12:50:47.234745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.923 [2024-11-28 12:50:47.234760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.923 qpair failed and we were unable to recover it.
00:27:04.923 [2024-11-28 12:50:47.234867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.923 [2024-11-28 12:50:47.234898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.923 qpair failed and we were unable to recover it.
00:27:04.923 [2024-11-28 12:50:47.235033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.923 [2024-11-28 12:50:47.235067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.923 qpair failed and we were unable to recover it.
00:27:04.923 [2024-11-28 12:50:47.235362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.923 [2024-11-28 12:50:47.235393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.923 qpair failed and we were unable to recover it.
00:27:04.923 [2024-11-28 12:50:47.235498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.923 [2024-11-28 12:50:47.235531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.923 qpair failed and we were unable to recover it.
00:27:04.923 [2024-11-28 12:50:47.235664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.923 [2024-11-28 12:50:47.235695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.923 qpair failed and we were unable to recover it.
00:27:04.923 [2024-11-28 12:50:47.235877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.923 [2024-11-28 12:50:47.235919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.923 qpair failed and we were unable to recover it.
00:27:04.923 [2024-11-28 12:50:47.236171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.923 [2024-11-28 12:50:47.236186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.923 qpair failed and we were unable to recover it.
00:27:04.923 [2024-11-28 12:50:47.236333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.923 [2024-11-28 12:50:47.236350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.923 qpair failed and we were unable to recover it.
00:27:04.923 [2024-11-28 12:50:47.236536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.923 [2024-11-28 12:50:47.236551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.923 qpair failed and we were unable to recover it.
00:27:04.923 [2024-11-28 12:50:47.236624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.923 [2024-11-28 12:50:47.236640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.923 qpair failed and we were unable to recover it.
00:27:04.923 [2024-11-28 12:50:47.236729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.923 [2024-11-28 12:50:47.236744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.923 qpair failed and we were unable to recover it.
00:27:04.923 [2024-11-28 12:50:47.236879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.923 [2024-11-28 12:50:47.236912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.923 qpair failed and we were unable to recover it.
00:27:04.923 [2024-11-28 12:50:47.237110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.923 [2024-11-28 12:50:47.237148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.923 qpair failed and we were unable to recover it.
00:27:04.923 [2024-11-28 12:50:47.237264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.923 [2024-11-28 12:50:47.237297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.923 qpair failed and we were unable to recover it.
00:27:04.923 [2024-11-28 12:50:47.237412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.923 [2024-11-28 12:50:47.237446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.923 qpair failed and we were unable to recover it.
00:27:04.923 [2024-11-28 12:50:47.237590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.923 [2024-11-28 12:50:47.237628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.237726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.237742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.237825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.237841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.237931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.237952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.238123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.238156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.238348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.238382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.238505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.238536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.238650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.238683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.238795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.238811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.238886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.238902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.239061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.239094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.239291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.239323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.239433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.239464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.239721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.239754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.239900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.239916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.240019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.240035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.240183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.240199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.240282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.240297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.240371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.240387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.240531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.240546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.240646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.240662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.240738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.240753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.240924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.240941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.241096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.241128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.241269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.241300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.241408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.241440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.241552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.241597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.241691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.241707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.241792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.241808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.241885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.241901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.241982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.241998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.242096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.242110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.242220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.242232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.242319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.242331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.242413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.242446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.242598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.242632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.242756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.242788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.242903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.242941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.243086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.924 [2024-11-28 12:50:47.243098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.924 qpair failed and we were unable to recover it.
00:27:04.924 [2024-11-28 12:50:47.243170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.243181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.243257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.243269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.243355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.243366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.243524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.243559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.243684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.243717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.243859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.243898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.244063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.244110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.244198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.244209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.244306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.244319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.244412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.244424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.244497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.244510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.244597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.244630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.244817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.244849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.244970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.245004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.245123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.245156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.245333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.245365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.245499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.245532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.245805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.245838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.245957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.245991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.246195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.246206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.246340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.246352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.246490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.246501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.246656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.246689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.246818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.246851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.247048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.247081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.247217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.247250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.247451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.247484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.247624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.247657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.247808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.247820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.247898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.247910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.247984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.247996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.248074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.248086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.248175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.248210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.248324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.248358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.248486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.248519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.248724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.248757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.248936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.925 [2024-11-28 12:50:47.248963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.925 qpair failed and we were unable to recover it.
00:27:04.925 [2024-11-28 12:50:47.249110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.925 [2024-11-28 12:50:47.249122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.925 qpair failed and we were unable to recover it. 00:27:04.925 [2024-11-28 12:50:47.249206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.249217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.249300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.249311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.249396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.249408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.249564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.249575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 
00:27:04.926 [2024-11-28 12:50:47.249652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.249663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.249736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.249747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.249879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.249891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.249984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.250018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.250164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.250196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 
00:27:04.926 [2024-11-28 12:50:47.250336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.250368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.250496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.250528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.250645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.250678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.250878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.250909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.251060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.251073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 
00:27:04.926 [2024-11-28 12:50:47.251229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.251241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.251377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.251408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.251595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.251627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.251759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.251791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.251971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.251983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 
00:27:04.926 [2024-11-28 12:50:47.252136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.252170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.252288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.252321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.252523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.252555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.252736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.252748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.252839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.252850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 
00:27:04.926 [2024-11-28 12:50:47.252935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.252974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.253203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.253235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.253446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.253481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.253670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.253703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.253892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.253925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 
00:27:04.926 [2024-11-28 12:50:47.254123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.254156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.254348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.254380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.254581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.254613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.254734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.254746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.254882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.254894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 
00:27:04.926 [2024-11-28 12:50:47.255010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.255051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.255262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.255294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.255414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.255446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.255637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.926 [2024-11-28 12:50:47.255671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.926 qpair failed and we were unable to recover it. 00:27:04.926 [2024-11-28 12:50:47.255870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.255882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 
00:27:04.927 [2024-11-28 12:50:47.256053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.256087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.256287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.256319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.256456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.256488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.256738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.256771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.256858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.256870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 
00:27:04.927 [2024-11-28 12:50:47.256972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.256984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.257090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.257123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.257331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.257364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.257485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.257516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.257644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.257678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 
00:27:04.927 [2024-11-28 12:50:47.257800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.257812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.257889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.257900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.257982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.258000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.258191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.258222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.258353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.258385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 
00:27:04.927 [2024-11-28 12:50:47.258580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.258613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.258735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.258767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.258941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.258956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.259051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.259063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.259138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.259150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 
00:27:04.927 [2024-11-28 12:50:47.259244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.259256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.259449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.259461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.259548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.259561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.259717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.259729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.259800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.259812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 
00:27:04.927 [2024-11-28 12:50:47.259891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.259903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.259999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.260011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.260100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.260133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.260245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.260277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.260468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.260501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 
00:27:04.927 [2024-11-28 12:50:47.260694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.260717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.260792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.260804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.260892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.260925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.261059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.261092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.261277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.261310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 
00:27:04.927 [2024-11-28 12:50:47.261521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.261559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.261780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.261811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.927 qpair failed and we were unable to recover it. 00:27:04.927 [2024-11-28 12:50:47.262017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.927 [2024-11-28 12:50:47.262029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 00:27:04.928 [2024-11-28 12:50:47.262098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.262110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 00:27:04.928 [2024-11-28 12:50:47.262193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.262205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 
00:27:04.928 [2024-11-28 12:50:47.262317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.262329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 00:27:04.928 [2024-11-28 12:50:47.262413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.262425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 00:27:04.928 [2024-11-28 12:50:47.262652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.262685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 00:27:04.928 [2024-11-28 12:50:47.262883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.262916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 00:27:04.928 [2024-11-28 12:50:47.263054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.263086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 
00:27:04.928 [2024-11-28 12:50:47.263245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.263277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 00:27:04.928 [2024-11-28 12:50:47.263392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.263424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 00:27:04.928 [2024-11-28 12:50:47.263557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.263589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 00:27:04.928 [2024-11-28 12:50:47.263725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.263758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 00:27:04.928 [2024-11-28 12:50:47.263880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.263913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 
00:27:04.928 [2024-11-28 12:50:47.264232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.264304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 00:27:04.928 [2024-11-28 12:50:47.264474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.264510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 00:27:04.928 [2024-11-28 12:50:47.264640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.264672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 00:27:04.928 [2024-11-28 12:50:47.264785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.264800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 00:27:04.928 [2024-11-28 12:50:47.264890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.264906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 
00:27:04.928 [2024-11-28 12:50:47.264996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.265013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 00:27:04.928 [2024-11-28 12:50:47.265092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.265107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 00:27:04.928 [2024-11-28 12:50:47.265205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.265237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 00:27:04.928 [2024-11-28 12:50:47.265422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.265454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 00:27:04.928 [2024-11-28 12:50:47.265571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.265603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 
00:27:04.928 [2024-11-28 12:50:47.265733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.265763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 00:27:04.928 [2024-11-28 12:50:47.265915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.265931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 00:27:04.928 [2024-11-28 12:50:47.266092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.266108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 00:27:04.928 [2024-11-28 12:50:47.266262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.266278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 00:27:04.928 [2024-11-28 12:50:47.266440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.266455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 
00:27:04.928 [2024-11-28 12:50:47.266699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.928 [2024-11-28 12:50:47.266731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.928 qpair failed and we were unable to recover it. 00:27:04.928 [2024-11-28 12:50:47.266862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.266894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.267026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.267058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.267182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.267214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.267393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.267425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 
00:27:04.929 [2024-11-28 12:50:47.267614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.267647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.267784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.267815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.267990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.268023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.268144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.268176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.268298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.268330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 
00:27:04.929 [2024-11-28 12:50:47.268512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.268551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.268735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.268777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.268923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.268938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.269112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.269127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.269219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.269234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 
00:27:04.929 [2024-11-28 12:50:47.269330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.269345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.269419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.269434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.269525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.269557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.269692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.269723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.269830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.269862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 
00:27:04.929 [2024-11-28 12:50:47.269976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.270009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.270135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.270151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.270304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.270320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.270399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.270415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.270500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.270516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 
00:27:04.929 [2024-11-28 12:50:47.270638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.270671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.270789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.270820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.270944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.270988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.271173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.271205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.271318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.271349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 
00:27:04.929 [2024-11-28 12:50:47.271462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.271495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.271617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.271648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.271847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.271879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.271998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.272041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.272191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.272207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 
00:27:04.929 [2024-11-28 12:50:47.272371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.272403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.272677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.272709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.272902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.272936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.273126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.929 [2024-11-28 12:50:47.273158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.929 qpair failed and we were unable to recover it. 00:27:04.929 [2024-11-28 12:50:47.273355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.273388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 
00:27:04.930 [2024-11-28 12:50:47.273571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.273604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.273786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.273818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.273942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.273993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.274097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.274113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.274268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.274284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 
00:27:04.930 [2024-11-28 12:50:47.274435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.274451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.274559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.274590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.274702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.274731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.274866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.274897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.275089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.275122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 
00:27:04.930 [2024-11-28 12:50:47.275236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.275274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.275469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.275500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.275619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.275650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.275779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.275812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.275957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.275990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 
00:27:04.930 [2024-11-28 12:50:47.276181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.276211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.276390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.276422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.276630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.276662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.276776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.276792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.276881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.276896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 
00:27:04.930 [2024-11-28 12:50:47.276993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.277009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.277118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.277135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.277233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.277263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.277383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.277415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.277610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.277642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 
00:27:04.930 [2024-11-28 12:50:47.277777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.277814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.277915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.277929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.278020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.278037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.278124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.278139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.278300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.278316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 
00:27:04.930 [2024-11-28 12:50:47.278398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.278414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.278496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.278511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.278725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.278756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.278878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.278909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 00:27:04.930 [2024-11-28 12:50:47.279034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.279068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 
00:27:04.930 [2024-11-28 12:50:47.279242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.930 [2024-11-28 12:50:47.279258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:04.930 qpair failed and we were unable to recover it. 
00:27:04.933 [2024-11-28 12:50:47.294845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.933 [2024-11-28 12:50:47.294882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.933 qpair failed and we were unable to recover it. 
00:27:04.934 [2024-11-28 12:50:47.301404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.301420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.301509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.301525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.301635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.301650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.301800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.301830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.302016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.302048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 
00:27:04.934 [2024-11-28 12:50:47.302159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.302191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.302325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.302358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.302491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.302523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.302649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.302681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.302887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.302920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 
00:27:04.934 [2024-11-28 12:50:47.303063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.303096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.303282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.303314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.303455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.303487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.303672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.303704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.303819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.303850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 
00:27:04.934 [2024-11-28 12:50:47.303968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.303985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.304137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.304153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.304245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.304260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.304334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.304349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.304499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.304532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 
00:27:04.934 [2024-11-28 12:50:47.304663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.304695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.304822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.304860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.304968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.305003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.305245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.305261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.305425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.305441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 
00:27:04.934 [2024-11-28 12:50:47.305607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.305638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.305747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.305778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.305886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.305917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.306061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.306094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.306207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.306222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 
00:27:04.934 [2024-11-28 12:50:47.306307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.306323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.306396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.306437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.306634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.306666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.306939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.306982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.307119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.307134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 
00:27:04.934 [2024-11-28 12:50:47.307229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.307245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.307386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.307402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.934 qpair failed and we were unable to recover it. 00:27:04.934 [2024-11-28 12:50:47.307598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.934 [2024-11-28 12:50:47.307629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.307755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.307788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.307926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.307970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 
00:27:04.935 [2024-11-28 12:50:47.308086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.308118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.308292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.308308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.308395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.308411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.308505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.308521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.308615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.308630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 
00:27:04.935 [2024-11-28 12:50:47.308716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.308731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.308904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.308935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.309056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.309089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.309210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.309241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.309367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.309399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 
00:27:04.935 [2024-11-28 12:50:47.309519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.309550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.309734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.309766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.309935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.309958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.310042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.310083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.310288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.310320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 
00:27:04.935 [2024-11-28 12:50:47.310438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.310471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.310583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.310616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.310803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.310835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.310964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.310980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.311065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.311081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 
00:27:04.935 [2024-11-28 12:50:47.311275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.311291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.311447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.311463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.311582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.311621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.311831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.311863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.311983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.312015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 
00:27:04.935 [2024-11-28 12:50:47.312266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.312282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.312451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.312483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.312625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.312656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.312865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.312897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.313034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.313068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 
00:27:04.935 [2024-11-28 12:50:47.313203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.313236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.313360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.313392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.313653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.313686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.313821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.313854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 00:27:04.935 [2024-11-28 12:50:47.313977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.935 [2024-11-28 12:50:47.314010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:04.935 qpair failed and we were unable to recover it. 
00:27:04.935 [2024-11-28 12:50:47.314116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.935 [2024-11-28 12:50:47.314148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.935 qpair failed and we were unable to recover it.
00:27:04.935 [identical connect()/qpair-failure triplet repeated 27 more times for tqpair=0xd02be0, 12:50:47.314254 through 12:50:47.318227]
00:27:04.936 [2024-11-28 12:50:47.318581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.936 [2024-11-28 12:50:47.318653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:04.936 qpair failed and we were unable to recover it.
00:27:04.936 [identical connect()/qpair-failure triplet repeated 39 more times for tqpair=0x7f8c64000b90, 12:50:47.318787 through 12:50:47.324867]
00:27:04.937 [2024-11-28 12:50:47.324994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.937 [2024-11-28 12:50:47.325034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.937 qpair failed and we were unable to recover it.
00:27:04.937 [identical connect()/qpair-failure triplet repeated 10 more times for tqpair=0x7f8c58000b90, 12:50:47.325229 through 12:50:47.326553]
00:27:04.937 [2024-11-28 12:50:47.326747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.937 [2024-11-28 12:50:47.326815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.937 qpair failed and we were unable to recover it.
00:27:04.939 [identical connect()/qpair-failure triplet repeated 35 more times for tqpair=0x7f8c5c000b90, 12:50:47.327037 through 12:50:47.333018]
00:27:04.939 [2024-11-28 12:50:47.333187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.333199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.333352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.333364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.333544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.333557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.333641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.333653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.333807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.333819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 
00:27:04.939 [2024-11-28 12:50:47.333885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.333896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.334052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.334070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.334161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.334177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.334320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.334340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.334432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.334449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 
00:27:04.939 [2024-11-28 12:50:47.334547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.334561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.334643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.334657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.334808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.334822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.334917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.334930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.335073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.335085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 
00:27:04.939 [2024-11-28 12:50:47.335164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.335176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.335252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.335264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.335334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.335346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.335509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.335521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.335589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.335601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 
00:27:04.939 [2024-11-28 12:50:47.335676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.335688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.335833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.335845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.335919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.335931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.336015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.336027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.336188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.336200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 
00:27:04.939 [2024-11-28 12:50:47.336280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.336292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.336363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.336375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.336507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.336519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.336599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.336611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.336688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.336701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 
00:27:04.939 [2024-11-28 12:50:47.336797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.336809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.336917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.336930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.337026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.337039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.337118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.337132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 00:27:04.939 [2024-11-28 12:50:47.337267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.939 [2024-11-28 12:50:47.337279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.939 qpair failed and we were unable to recover it. 
00:27:04.939 [2024-11-28 12:50:47.337351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.337364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.337434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.337446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.337514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.337526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.337679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.337692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.337760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.337773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 
00:27:04.940 [2024-11-28 12:50:47.337835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.337848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.337911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.337923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.338025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.338038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.338103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.338115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.338179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.338191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 
00:27:04.940 [2024-11-28 12:50:47.338326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.338338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.338416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.338428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.338501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.338513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.338586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.338599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.338733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.338745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 
00:27:04.940 [2024-11-28 12:50:47.338911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.338923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.339063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.339075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.339140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.339153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.339377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.339389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.339474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.339486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 
00:27:04.940 [2024-11-28 12:50:47.339686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.339697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.339767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.339780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.339922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.339934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.340090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.340103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.340161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.340173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 
00:27:04.940 [2024-11-28 12:50:47.340337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.340349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.340430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.340441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.340524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.340536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.340622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.340634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.340767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.340779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 
00:27:04.940 [2024-11-28 12:50:47.340843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.340855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.340955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.340967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.341043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.341056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.341203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.341215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.341296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.341308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 
00:27:04.940 [2024-11-28 12:50:47.341374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.341386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.341459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.341472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.341611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.341624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.940 qpair failed and we were unable to recover it. 00:27:04.940 [2024-11-28 12:50:47.341759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.940 [2024-11-28 12:50:47.341773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.341841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.341853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 
00:27:04.941 [2024-11-28 12:50:47.342006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.342018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.342095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.342106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.342267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.342279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.342419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.342431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.342505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.342517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 
00:27:04.941 [2024-11-28 12:50:47.342655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.342667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.342810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.342822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.342887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.342899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.343067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.343080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.343153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.343166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 
00:27:04.941 [2024-11-28 12:50:47.343248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.343261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.343335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.343347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.343485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.343498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.343591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.343604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.343852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.343865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 
00:27:04.941 [2024-11-28 12:50:47.344018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.344031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.344101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.344114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.344210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.344222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.344308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.344320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.344453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.344465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 
00:27:04.941 [2024-11-28 12:50:47.344556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.344568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.344644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.344656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.344749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.344761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.344895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.344907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.344981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.344993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 
00:27:04.941 [2024-11-28 12:50:47.345073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.345085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.345163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.345176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.345259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.345271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.345341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.345354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.345489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.345503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 
00:27:04.941 [2024-11-28 12:50:47.345601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.345613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.345759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.345771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.345906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.345918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.346067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.346079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.346168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.346181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 
00:27:04.941 [2024-11-28 12:50:47.346261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.346273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.941 [2024-11-28 12:50:47.346339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.941 [2024-11-28 12:50:47.346351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.941 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.346432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.346444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.346512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.346526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.346661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.346673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 
00:27:04.942 [2024-11-28 12:50:47.346759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.346771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.346851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.346863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.346997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.347010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.347096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.347109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.347280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.347292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 
00:27:04.942 [2024-11-28 12:50:47.347438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.347450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.347527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.347539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.347695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.347708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.347843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.347855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.347995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.348008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 
00:27:04.942 [2024-11-28 12:50:47.348087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.348099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.348242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.348254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.348355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.348367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.348528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.348540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.348721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.348733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 
00:27:04.942 [2024-11-28 12:50:47.348934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.348957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.349030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.349042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.349128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.349140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.349299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.349311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.349460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.349472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 
00:27:04.942 [2024-11-28 12:50:47.349546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.349558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.349622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.349634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.349793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.349806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.350032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.350045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.350135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.350148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 
00:27:04.942 [2024-11-28 12:50:47.350242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.350255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.350348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.350361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.350447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.350480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.350606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-11-28 12:50:47.350640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.942 qpair failed and we were unable to recover it. 00:27:04.942 [2024-11-28 12:50:47.350781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.350813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 
00:27:04.943 [2024-11-28 12:50:47.350941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.351006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.351245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.351257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.351349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.351361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.351442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.351455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.351619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.351631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 
00:27:04.943 [2024-11-28 12:50:47.351767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.351778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.351858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.351870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.352035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.352048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.352138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.352153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.352382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.352394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 
00:27:04.943 [2024-11-28 12:50:47.352584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.352597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.352735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.352747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.352878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.352891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.352969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.352982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.353216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.353228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 
00:27:04.943 [2024-11-28 12:50:47.353384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.353398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.353645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.353658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.353886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.353898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.354130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.354142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.354208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.354221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 
00:27:04.943 [2024-11-28 12:50:47.354300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.354313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.354400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.354412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.354612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.354624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.354828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.354841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.354990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.355003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 
00:27:04.943 [2024-11-28 12:50:47.355144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.355156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.355229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.355241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.355402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.355414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.355574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.355586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.355752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.355765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 
00:27:04.943 [2024-11-28 12:50:47.355855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.355867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.356015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.356027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.356180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.356192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.356339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.356351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 00:27:04.943 [2024-11-28 12:50:47.356489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.943 [2024-11-28 12:50:47.356501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.943 qpair failed and we were unable to recover it. 
00:27:04.943 [2024-11-28 12:50:47.356648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.943 [2024-11-28 12:50:47.356660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.943 qpair failed and we were unable to recover it.
00:27:04.943 [2024-11-28 12:50:47.356809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.943 [2024-11-28 12:50:47.356822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.943 qpair failed and we were unable to recover it.
00:27:04.943 [2024-11-28 12:50:47.356892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.356905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.357066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.357078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.357149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.357162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.357320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.357332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.357412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.357424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.357636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.357648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.357730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.357742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.357897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.357909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.358002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.358015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.358220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.358233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.358309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.358321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.358405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.358419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.358617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.358629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.358765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.358778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.358843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.358856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.358946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.358976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.359139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.359152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.359234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.359247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.359394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.359407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.359477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.359489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.359623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.359635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.359736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.359748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.359881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.359893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.360041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.360054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.360199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.360212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.360288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.360300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.360379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.360391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.360469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.360482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.360546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.360558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.360716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.360729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.360966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.360979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.361050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.361062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.361133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.361145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.361222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.361234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.361433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.361446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.361595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.361607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.361863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.361876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.944 [2024-11-28 12:50:47.362031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.944 [2024-11-28 12:50:47.362043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.944 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.362203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.362215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.362383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.362395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.362470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.362482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.362550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.362562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.362685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.362698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.362781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.362794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.362873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.362886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.363038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.363052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.363138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.363151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.363287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.363299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.363455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.363467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.363561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.363574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.363730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.363744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.363896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.363911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.364000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.364013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.364094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.364107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.364223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.364236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.364371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.364384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.364450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.364462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.364607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.364620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.364702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.364715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.367070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.367084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.367241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.367254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.367480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.367493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.367599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.367612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.367761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.367774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.367952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.367965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.368115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.368128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.368227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.368240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.368333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.368345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.368549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.368562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.368707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.368719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.368870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.368882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.368966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.368979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.369069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.369082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.369224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.369237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.369311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.369324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.369471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.945 [2024-11-28 12:50:47.369484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.945 qpair failed and we were unable to recover it.
00:27:04.945 [2024-11-28 12:50:47.369551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.946 [2024-11-28 12:50:47.369564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.946 qpair failed and we were unable to recover it.
00:27:04.946 [2024-11-28 12:50:47.369710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.946 [2024-11-28 12:50:47.369723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.946 qpair failed and we were unable to recover it.
00:27:04.946 [2024-11-28 12:50:47.369929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.946 [2024-11-28 12:50:47.369942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.946 qpair failed and we were unable to recover it.
00:27:04.946 [2024-11-28 12:50:47.370030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.946 [2024-11-28 12:50:47.370043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.946 qpair failed and we were unable to recover it.
00:27:04.946 [2024-11-28 12:50:47.370130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.946 [2024-11-28 12:50:47.370143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.946 qpair failed and we were unable to recover it.
00:27:04.946 [2024-11-28 12:50:47.370218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.946 [2024-11-28 12:50:47.370231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.946 qpair failed and we were unable to recover it.
00:27:04.946 [2024-11-28 12:50:47.370378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.946 [2024-11-28 12:50:47.370392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.946 qpair failed and we were unable to recover it.
00:27:04.946 [2024-11-28 12:50:47.370555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.946 [2024-11-28 12:50:47.370568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.946 qpair failed and we were unable to recover it.
00:27:04.946 [2024-11-28 12:50:47.370645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.946 [2024-11-28 12:50:47.370658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.946 qpair failed and we were unable to recover it.
00:27:04.946 [2024-11-28 12:50:47.370735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.946 [2024-11-28 12:50:47.370748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.946 qpair failed and we were unable to recover it.
00:27:04.946 [2024-11-28 12:50:47.370834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.946 [2024-11-28 12:50:47.370847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.946 qpair failed and we were unable to recover it.
00:27:04.946 [2024-11-28 12:50:47.370979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.946 [2024-11-28 12:50:47.370994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.946 qpair failed and we were unable to recover it.
00:27:04.946 [2024-11-28 12:50:47.371102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.946 [2024-11-28 12:50:47.371115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.946 qpair failed and we were unable to recover it.
00:27:04.946 [2024-11-28 12:50:47.371211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.946 [2024-11-28 12:50:47.371224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.946 qpair failed and we were unable to recover it.
00:27:04.946 [2024-11-28 12:50:47.371307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.946 [2024-11-28 12:50:47.371320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.946 qpair failed and we were unable to recover it.
00:27:04.946 [2024-11-28 12:50:47.371462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.946 [2024-11-28 12:50:47.371477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.946 qpair failed and we were unable to recover it.
00:27:04.946 [2024-11-28 12:50:47.371570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.946 [2024-11-28 12:50:47.371583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.946 qpair failed and we were unable to recover it.
00:27:04.946 [2024-11-28 12:50:47.371716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.946 [2024-11-28 12:50:47.371729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.946 qpair failed and we were unable to recover it. 00:27:04.946 [2024-11-28 12:50:47.371816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.946 [2024-11-28 12:50:47.371828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.946 qpair failed and we were unable to recover it. 00:27:04.946 [2024-11-28 12:50:47.371966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.946 [2024-11-28 12:50:47.371980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.946 qpair failed and we were unable to recover it. 00:27:04.946 [2024-11-28 12:50:47.372070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.946 [2024-11-28 12:50:47.372083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.946 qpair failed and we were unable to recover it. 00:27:04.946 [2024-11-28 12:50:47.372228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.946 [2024-11-28 12:50:47.372241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.946 qpair failed and we were unable to recover it. 
00:27:04.946 [2024-11-28 12:50:47.372407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.946 [2024-11-28 12:50:47.372420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.946 qpair failed and we were unable to recover it. 00:27:04.946 [2024-11-28 12:50:47.372586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.946 [2024-11-28 12:50:47.372599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.946 qpair failed and we were unable to recover it. 00:27:04.946 [2024-11-28 12:50:47.372703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.946 [2024-11-28 12:50:47.372716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.946 qpair failed and we were unable to recover it. 00:27:04.946 [2024-11-28 12:50:47.372858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.946 [2024-11-28 12:50:47.372871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.946 qpair failed and we were unable to recover it. 00:27:04.946 [2024-11-28 12:50:47.373032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.946 [2024-11-28 12:50:47.373046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.946 qpair failed and we were unable to recover it. 
00:27:04.946 [2024-11-28 12:50:47.373193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.946 [2024-11-28 12:50:47.373207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.946 qpair failed and we were unable to recover it. 00:27:04.946 [2024-11-28 12:50:47.373358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.946 [2024-11-28 12:50:47.373371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.946 qpair failed and we were unable to recover it. 00:27:04.946 [2024-11-28 12:50:47.373517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.946 [2024-11-28 12:50:47.373530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.946 qpair failed and we were unable to recover it. 00:27:04.946 [2024-11-28 12:50:47.373614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.946 [2024-11-28 12:50:47.373626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.946 qpair failed and we were unable to recover it. 00:27:04.946 [2024-11-28 12:50:47.373772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.946 [2024-11-28 12:50:47.373784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.946 qpair failed and we were unable to recover it. 
00:27:04.946 [2024-11-28 12:50:47.373921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.946 [2024-11-28 12:50:47.373935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.946 qpair failed and we were unable to recover it. 00:27:04.946 [2024-11-28 12:50:47.374019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.374032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.374135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.374148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.374241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.374254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.374402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.374415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 
00:27:04.947 [2024-11-28 12:50:47.374588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.374601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.374684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.374697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.374869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.374902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.375047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.375081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.375210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.375243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 
00:27:04.947 [2024-11-28 12:50:47.375377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.375410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.375601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.375635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.375762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.375796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.376061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.376096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.376289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.376323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 
00:27:04.947 [2024-11-28 12:50:47.376443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.376476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.376675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.376709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.376902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.376936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.377206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.377240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.377333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.377345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 
00:27:04.947 [2024-11-28 12:50:47.377423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.377435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.377517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.377529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.377618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.377650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.377788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.377828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.378019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.378053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 
00:27:04.947 [2024-11-28 12:50:47.378267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.378300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.378491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.378503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.378669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.378702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.378894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.378934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.379141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.379205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 
00:27:04.947 [2024-11-28 12:50:47.379425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.379441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.379692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.379705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.379846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.379879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.380075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.380109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.380226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.380259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 
00:27:04.947 [2024-11-28 12:50:47.380511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.380526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.380676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.380688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.380914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.380931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.381081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.947 [2024-11-28 12:50:47.381094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.947 qpair failed and we were unable to recover it. 00:27:04.947 [2024-11-28 12:50:47.381168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.948 [2024-11-28 12:50:47.381180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:04.948 qpair failed and we were unable to recover it. 
00:27:04.948 [2024-11-28 12:50:47.381383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.948 [2024-11-28 12:50:47.381395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.948 qpair failed and we were unable to recover it.
00:27:04.948 [2024-11-28 12:50:47.381548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.948 [2024-11-28 12:50:47.381560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:04.948 qpair failed and we were unable to recover it.
00:27:04.948 [2024-11-28 12:50:47.381694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.948 [2024-11-28 12:50:47.381732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:04.948 qpair failed and we were unable to recover it.
00:27:04.948 [2024-11-28 12:50:47.382002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.948 [2024-11-28 12:50:47.382068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.948 qpair failed and we were unable to recover it.
00:27:04.948 [2024-11-28 12:50:47.382227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.948 [2024-11-28 12:50:47.382262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.948 qpair failed and we were unable to recover it.
00:27:04.948 [2024-11-28 12:50:47.382408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.948 [2024-11-28 12:50:47.382441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:04.948 qpair failed and we were unable to recover it.
00:27:04.948 [2024-11-28 12:50:47.382631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.948 [2024-11-28 12:50:47.382681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:05.229 qpair failed and we were unable to recover it.
00:27:05.229 [2024-11-28 12:50:47.382945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.229 [2024-11-28 12:50:47.383017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:05.229 qpair failed and we were unable to recover it.
00:27:05.229 [2024-11-28 12:50:47.383195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.229 [2024-11-28 12:50:47.383241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:05.229 qpair failed and we were unable to recover it.
00:27:05.229 [2024-11-28 12:50:47.383354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.229 [2024-11-28 12:50:47.383366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:05.229 qpair failed and we were unable to recover it.
00:27:05.229 [2024-11-28 12:50:47.383599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.229 [2024-11-28 12:50:47.383612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.229 qpair failed and we were unable to recover it. 00:27:05.229 [2024-11-28 12:50:47.383757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.229 [2024-11-28 12:50:47.383769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.229 qpair failed and we were unable to recover it. 00:27:05.229 [2024-11-28 12:50:47.383998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.229 [2024-11-28 12:50:47.384011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.229 qpair failed and we were unable to recover it. 00:27:05.229 [2024-11-28 12:50:47.384096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.229 [2024-11-28 12:50:47.384110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.229 qpair failed and we were unable to recover it. 00:27:05.229 [2024-11-28 12:50:47.384256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.229 [2024-11-28 12:50:47.384267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.229 qpair failed and we were unable to recover it. 
00:27:05.229 [2024-11-28 12:50:47.384405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.229 [2024-11-28 12:50:47.384417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.229 qpair failed and we were unable to recover it. 00:27:05.229 [2024-11-28 12:50:47.384556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.229 [2024-11-28 12:50:47.384568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.229 qpair failed and we were unable to recover it. 00:27:05.229 [2024-11-28 12:50:47.384654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.229 [2024-11-28 12:50:47.384665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.229 qpair failed and we were unable to recover it. 00:27:05.229 [2024-11-28 12:50:47.384743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.229 [2024-11-28 12:50:47.384755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.229 qpair failed and we were unable to recover it. 00:27:05.229 [2024-11-28 12:50:47.384913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.229 [2024-11-28 12:50:47.384925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.229 qpair failed and we were unable to recover it. 
00:27:05.229 [2024-11-28 12:50:47.385152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.229 [2024-11-28 12:50:47.385164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.229 qpair failed and we were unable to recover it. 00:27:05.229 [2024-11-28 12:50:47.385333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.230 [2024-11-28 12:50:47.385345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.230 qpair failed and we were unable to recover it. 00:27:05.230 [2024-11-28 12:50:47.385489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.230 [2024-11-28 12:50:47.385501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.230 qpair failed and we were unable to recover it. 00:27:05.230 [2024-11-28 12:50:47.385597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.230 [2024-11-28 12:50:47.385612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.230 qpair failed and we were unable to recover it. 00:27:05.230 [2024-11-28 12:50:47.385765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.230 [2024-11-28 12:50:47.385776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.230 qpair failed and we were unable to recover it. 
00:27:05.230 [2024-11-28 12:50:47.385912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.230 [2024-11-28 12:50:47.385923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.230 qpair failed and we were unable to recover it. 00:27:05.230 [2024-11-28 12:50:47.386200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.230 [2024-11-28 12:50:47.386213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.230 qpair failed and we were unable to recover it. 00:27:05.230 [2024-11-28 12:50:47.386351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.230 [2024-11-28 12:50:47.386363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.230 qpair failed and we were unable to recover it. 00:27:05.230 [2024-11-28 12:50:47.386566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.230 [2024-11-28 12:50:47.386578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.230 qpair failed and we were unable to recover it. 00:27:05.230 [2024-11-28 12:50:47.386727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.230 [2024-11-28 12:50:47.386739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.230 qpair failed and we were unable to recover it. 
00:27:05.230 [2024-11-28 12:50:47.386825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.230 [2024-11-28 12:50:47.386837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.230 qpair failed and we were unable to recover it. 00:27:05.230 [2024-11-28 12:50:47.387009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.230 [2024-11-28 12:50:47.387021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.230 qpair failed and we were unable to recover it. 00:27:05.230 [2024-11-28 12:50:47.387112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.230 [2024-11-28 12:50:47.387125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.230 qpair failed and we were unable to recover it. 00:27:05.230 [2024-11-28 12:50:47.387251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.230 [2024-11-28 12:50:47.387263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.230 qpair failed and we were unable to recover it. 00:27:05.230 [2024-11-28 12:50:47.387403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.230 [2024-11-28 12:50:47.387415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.230 qpair failed and we were unable to recover it. 
00:27:05.230 [... identical posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pairs for tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 repeated from 12:50:47.387614 through 12:50:47.390079; each attempt ended with "qpair failed and we were unable to recover it." ...]
00:27:05.230 [2024-11-28 12:50:47.390300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.230 [2024-11-28 12:50:47.390330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.230 qpair failed and we were unable to recover it. 00:27:05.230 [2024-11-28 12:50:47.390525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.230 [2024-11-28 12:50:47.390569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.230 qpair failed and we were unable to recover it. 00:27:05.231 [2024-11-28 12:50:47.390692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.231 [2024-11-28 12:50:47.390725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.231 qpair failed and we were unable to recover it. 00:27:05.231 [2024-11-28 12:50:47.390854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.231 [2024-11-28 12:50:47.390886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.231 qpair failed and we were unable to recover it. 00:27:05.231 [2024-11-28 12:50:47.391101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.231 [2024-11-28 12:50:47.391136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.231 qpair failed and we were unable to recover it. 
00:27:05.231 [... identical posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pairs for tqpair=0xd02be0 with addr=10.0.0.2, port=4420 repeated from 12:50:47.391360 through 12:50:47.403647; each attempt ended with "qpair failed and we were unable to recover it." ...]
00:27:05.233 [2024-11-28 12:50:47.403798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.233 [2024-11-28 12:50:47.403814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.233 qpair failed and we were unable to recover it. 00:27:05.233 [2024-11-28 12:50:47.403981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.233 [2024-11-28 12:50:47.404014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.233 qpair failed and we were unable to recover it. 00:27:05.233 [2024-11-28 12:50:47.404157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.233 [2024-11-28 12:50:47.404189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.233 qpair failed and we were unable to recover it. 00:27:05.233 [2024-11-28 12:50:47.404248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd10b20 (9): Bad file descriptor 00:27:05.233 [2024-11-28 12:50:47.404566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.233 [2024-11-28 12:50:47.404639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.233 qpair failed and we were unable to recover it. 00:27:05.233 [2024-11-28 12:50:47.404860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.233 [2024-11-28 12:50:47.404897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.233 qpair failed and we were unable to recover it. 
00:27:05.233 [... identical posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pairs for tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 repeated from 12:50:47.405167 through 12:50:47.409286; each attempt ended with "qpair failed and we were unable to recover it." ...]
00:27:05.233 [2024-11-28 12:50:47.409552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.233 [2024-11-28 12:50:47.409585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.233 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.409726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.409758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.409884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.409917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.410189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.410262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.410499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.410544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 
00:27:05.234 [2024-11-28 12:50:47.410826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.410860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.410988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.411023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.411235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.411267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.411458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.411470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.411581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.411614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 
00:27:05.234 [2024-11-28 12:50:47.411812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.411846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.412036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.412070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.412268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.412300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.412442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.412474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.412722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.412755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 
00:27:05.234 [2024-11-28 12:50:47.412880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.412911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.413105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.413138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.413385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.413417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.413681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.413714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.413831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.413863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 
00:27:05.234 [2024-11-28 12:50:47.414057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.414091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.414216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.414249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.414440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.414472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.414646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.414678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.414859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.414892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 
00:27:05.234 [2024-11-28 12:50:47.415097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.415130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.415376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.415409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.415609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.415642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.415772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.415804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 00:27:05.234 [2024-11-28 12:50:47.416000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.234 [2024-11-28 12:50:47.416034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.234 qpair failed and we were unable to recover it. 
00:27:05.234 [2024-11-28 12:50:47.416214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.416247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.235 qpair failed and we were unable to recover it. 00:27:05.235 [2024-11-28 12:50:47.416371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.416404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.235 qpair failed and we were unable to recover it. 00:27:05.235 [2024-11-28 12:50:47.416522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.416553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.235 qpair failed and we were unable to recover it. 00:27:05.235 [2024-11-28 12:50:47.416666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.416699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.235 qpair failed and we were unable to recover it. 00:27:05.235 [2024-11-28 12:50:47.416853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.416886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.235 qpair failed and we were unable to recover it. 
00:27:05.235 [2024-11-28 12:50:47.417073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.417106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.235 qpair failed and we were unable to recover it. 00:27:05.235 [2024-11-28 12:50:47.417304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.417337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.235 qpair failed and we were unable to recover it. 00:27:05.235 [2024-11-28 12:50:47.417434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.417446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.235 qpair failed and we were unable to recover it. 00:27:05.235 [2024-11-28 12:50:47.417634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.417665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.235 qpair failed and we were unable to recover it. 00:27:05.235 [2024-11-28 12:50:47.417791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.417825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.235 qpair failed and we were unable to recover it. 
00:27:05.235 [2024-11-28 12:50:47.418034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.418068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.235 qpair failed and we were unable to recover it. 00:27:05.235 [2024-11-28 12:50:47.418219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.418231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.235 qpair failed and we were unable to recover it. 00:27:05.235 [2024-11-28 12:50:47.418305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.418317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.235 qpair failed and we were unable to recover it. 00:27:05.235 [2024-11-28 12:50:47.418457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.418488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.235 qpair failed and we were unable to recover it. 00:27:05.235 [2024-11-28 12:50:47.418692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.418731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.235 qpair failed and we were unable to recover it. 
00:27:05.235 [2024-11-28 12:50:47.418851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.418883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.235 qpair failed and we were unable to recover it. 00:27:05.235 [2024-11-28 12:50:47.419088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.419122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.235 qpair failed and we were unable to recover it. 00:27:05.235 [2024-11-28 12:50:47.419313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.419346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.235 qpair failed and we were unable to recover it. 00:27:05.235 [2024-11-28 12:50:47.419528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.419561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.235 qpair failed and we were unable to recover it. 00:27:05.235 [2024-11-28 12:50:47.419742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.419775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.235 qpair failed and we were unable to recover it. 
00:27:05.235 [2024-11-28 12:50:47.419886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.419919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.235 qpair failed and we were unable to recover it. 00:27:05.235 [2024-11-28 12:50:47.420060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.420094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.235 qpair failed and we were unable to recover it. 00:27:05.235 [2024-11-28 12:50:47.420294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.420326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.235 qpair failed and we were unable to recover it. 00:27:05.235 [2024-11-28 12:50:47.420594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.235 [2024-11-28 12:50:47.420605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 00:27:05.236 [2024-11-28 12:50:47.420801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.420833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 
00:27:05.236 [2024-11-28 12:50:47.421029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.421063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 00:27:05.236 [2024-11-28 12:50:47.421182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.421194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 00:27:05.236 [2024-11-28 12:50:47.421277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.421289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 00:27:05.236 [2024-11-28 12:50:47.421432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.421465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 00:27:05.236 [2024-11-28 12:50:47.421722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.421755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 
00:27:05.236 [2024-11-28 12:50:47.421967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.422000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 00:27:05.236 [2024-11-28 12:50:47.422186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.422220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 00:27:05.236 [2024-11-28 12:50:47.422337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.422369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 00:27:05.236 [2024-11-28 12:50:47.422669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.422702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 00:27:05.236 [2024-11-28 12:50:47.422817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.422851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 
00:27:05.236 [2024-11-28 12:50:47.422975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.423007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 00:27:05.236 [2024-11-28 12:50:47.423213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.423226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 00:27:05.236 [2024-11-28 12:50:47.423460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.423493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 00:27:05.236 [2024-11-28 12:50:47.423678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.423711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 00:27:05.236 [2024-11-28 12:50:47.423894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.423927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 
00:27:05.236 [2024-11-28 12:50:47.424132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.424164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 00:27:05.236 [2024-11-28 12:50:47.424497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.424582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 00:27:05.236 [2024-11-28 12:50:47.424748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.424765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 00:27:05.236 [2024-11-28 12:50:47.424957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.424974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 00:27:05.236 [2024-11-28 12:50:47.425087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.425120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 
00:27:05.236 [2024-11-28 12:50:47.425257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.425289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 00:27:05.236 [2024-11-28 12:50:47.425469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.425502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 00:27:05.236 [2024-11-28 12:50:47.425699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.425732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 00:27:05.236 [2024-11-28 12:50:47.426014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.426052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 00:27:05.236 [2024-11-28 12:50:47.426270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.236 [2024-11-28 12:50:47.426286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.236 qpair failed and we were unable to recover it. 
00:27:05.236 [2024-11-28 12:50:47.426388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.236 [2024-11-28 12:50:47.426404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.236 qpair failed and we were unable to recover it.
[... the same three-line error repeats for tqpair=0xd02be0 from 12:50:47.426 through 12:50:47.438 ...]
00:27:05.238 [2024-11-28 12:50:47.438515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.238 [2024-11-28 12:50:47.438586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:05.238 qpair failed and we were unable to recover it.
[... the same three-line error repeats for tqpair=0x7f8c58000b90 from 12:50:47.438 through 12:50:47.446 ...]
00:27:05.240 [2024-11-28 12:50:47.446717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.240 [2024-11-28 12:50:47.446789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:05.240 qpair failed and we were unable to recover it.
[... the same three-line error repeats for tqpair=0x7f8c5c000b90 from 12:50:47.446 through 12:50:47.450 ...]
00:27:05.240 [2024-11-28 12:50:47.450319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.240 [2024-11-28 12:50:47.450352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.240 qpair failed and we were unable to recover it. 00:27:05.240 [2024-11-28 12:50:47.450550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.240 [2024-11-28 12:50:47.450583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.240 qpair failed and we were unable to recover it. 00:27:05.240 [2024-11-28 12:50:47.450703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.240 [2024-11-28 12:50:47.450735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.240 qpair failed and we were unable to recover it. 00:27:05.240 [2024-11-28 12:50:47.450983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.240 [2024-11-28 12:50:47.450995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.240 qpair failed and we were unable to recover it. 00:27:05.240 [2024-11-28 12:50:47.451146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.240 [2024-11-28 12:50:47.451178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.240 qpair failed and we were unable to recover it. 
00:27:05.240 [2024-11-28 12:50:47.451374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.240 [2024-11-28 12:50:47.451407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.240 qpair failed and we were unable to recover it. 00:27:05.240 [2024-11-28 12:50:47.451527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.240 [2024-11-28 12:50:47.451559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.240 qpair failed and we were unable to recover it. 00:27:05.241 [2024-11-28 12:50:47.451707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.451738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 00:27:05.241 [2024-11-28 12:50:47.451867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.451901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 00:27:05.241 [2024-11-28 12:50:47.452182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.452215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 
00:27:05.241 [2024-11-28 12:50:47.452392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.452404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 00:27:05.241 [2024-11-28 12:50:47.452569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.452602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 00:27:05.241 [2024-11-28 12:50:47.452809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.452842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 00:27:05.241 [2024-11-28 12:50:47.453029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.453063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 00:27:05.241 [2024-11-28 12:50:47.453210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.453243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 
00:27:05.241 [2024-11-28 12:50:47.453436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.453470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 00:27:05.241 [2024-11-28 12:50:47.453704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.453717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 00:27:05.241 [2024-11-28 12:50:47.453968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.454002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 00:27:05.241 [2024-11-28 12:50:47.454199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.454232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 00:27:05.241 [2024-11-28 12:50:47.454359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.454392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 
00:27:05.241 [2024-11-28 12:50:47.454584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.454617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 00:27:05.241 [2024-11-28 12:50:47.454745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.454778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 00:27:05.241 [2024-11-28 12:50:47.455046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.455081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 00:27:05.241 [2024-11-28 12:50:47.455293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.455326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 00:27:05.241 [2024-11-28 12:50:47.455677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.455748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 
00:27:05.241 [2024-11-28 12:50:47.456007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.456047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 00:27:05.241 [2024-11-28 12:50:47.456254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.456287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 00:27:05.241 [2024-11-28 12:50:47.456481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.456516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 00:27:05.241 [2024-11-28 12:50:47.456698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.456713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 00:27:05.241 [2024-11-28 12:50:47.456918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.456963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 
00:27:05.241 [2024-11-28 12:50:47.457098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.241 [2024-11-28 12:50:47.457131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.241 qpair failed and we were unable to recover it. 00:27:05.241 [2024-11-28 12:50:47.457263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.457295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.457490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.457521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.457719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.457752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.458033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.458066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 
00:27:05.242 [2024-11-28 12:50:47.458318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.458351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.458537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.458572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.458689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.458732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.458929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.458974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.459193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.459225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 
00:27:05.242 [2024-11-28 12:50:47.459449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.459482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.459683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.459714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.459911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.459943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.460149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.460182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.460377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.460392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 
00:27:05.242 [2024-11-28 12:50:47.460572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.460606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.460803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.460836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.460967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.461000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.461221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.461254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.461432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.461465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 
00:27:05.242 [2024-11-28 12:50:47.461603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.461636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.461892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.461924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.462193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.462226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.462429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.462445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.462611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.462642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 
00:27:05.242 [2024-11-28 12:50:47.462777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.462810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.462945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.462988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.463185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.463218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.463340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.463372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.463583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.463616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 
00:27:05.242 [2024-11-28 12:50:47.463823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.242 [2024-11-28 12:50:47.463855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.242 qpair failed and we were unable to recover it. 00:27:05.242 [2024-11-28 12:50:47.464042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.464076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 00:27:05.243 [2024-11-28 12:50:47.464257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.464291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 00:27:05.243 [2024-11-28 12:50:47.464436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.464469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 00:27:05.243 [2024-11-28 12:50:47.464671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.464704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 
00:27:05.243 [2024-11-28 12:50:47.464888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.464921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 00:27:05.243 [2024-11-28 12:50:47.465184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.465217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 00:27:05.243 [2024-11-28 12:50:47.465478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.465511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 00:27:05.243 [2024-11-28 12:50:47.465688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.465704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 00:27:05.243 [2024-11-28 12:50:47.465919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.465934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 
00:27:05.243 [2024-11-28 12:50:47.466032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.466048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 00:27:05.243 [2024-11-28 12:50:47.466141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.466158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 00:27:05.243 [2024-11-28 12:50:47.466241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.466256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 00:27:05.243 [2024-11-28 12:50:47.466426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.466442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 00:27:05.243 [2024-11-28 12:50:47.466624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.466640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 
00:27:05.243 [2024-11-28 12:50:47.466849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.466864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 00:27:05.243 [2024-11-28 12:50:47.467030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.467045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 00:27:05.243 [2024-11-28 12:50:47.467161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.467180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 00:27:05.243 [2024-11-28 12:50:47.467285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.467302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 00:27:05.243 [2024-11-28 12:50:47.467457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.467473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 
00:27:05.243 [2024-11-28 12:50:47.467630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.467663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 00:27:05.243 [2024-11-28 12:50:47.467784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.467815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 00:27:05.243 [2024-11-28 12:50:47.467933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.467974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 00:27:05.243 [2024-11-28 12:50:47.468105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.468138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 00:27:05.243 [2024-11-28 12:50:47.468338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.468376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 
00:27:05.243 [2024-11-28 12:50:47.468533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.468548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 00:27:05.243 [2024-11-28 12:50:47.468700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.468731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 00:27:05.243 [2024-11-28 12:50:47.468849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.468882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 00:27:05.243 [2024-11-28 12:50:47.469035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.469067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 00:27:05.243 [2024-11-28 12:50:47.469331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.243 [2024-11-28 12:50:47.469364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.243 qpair failed and we were unable to recover it. 
00:27:05.244 [2024-11-28 12:50:47.469559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.469589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.469866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.469898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.470087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.470120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.470242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.470273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.470459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.470497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 
00:27:05.244 [2024-11-28 12:50:47.470669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.470685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.470784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.470812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.470912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.470926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.470998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.471029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.471310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.471344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 
00:27:05.244 [2024-11-28 12:50:47.471482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.471515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.471761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.471772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.471921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.471934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.472109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.472122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.472245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.472279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 
00:27:05.244 [2024-11-28 12:50:47.472502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.472520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.472610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.472627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.472767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.472800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.472970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.473003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.473220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.473253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 
00:27:05.244 [2024-11-28 12:50:47.473399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.473431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.473614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.473647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.473774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.473806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.474020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.474053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.474242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.474274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 
00:27:05.244 [2024-11-28 12:50:47.474550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.474583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.474798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.474830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.475128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.475161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.475362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.475403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.244 [2024-11-28 12:50:47.475558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.475573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 
00:27:05.244 [2024-11-28 12:50:47.475731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.244 [2024-11-28 12:50:47.475765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.244 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.475895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.475926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.476126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.476159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.476350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.476366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.476555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.476587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 
00:27:05.245 [2024-11-28 12:50:47.476832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.476865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.477055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.477089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.477265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.477297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.477496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.477528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.477804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.477836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 
00:27:05.245 [2024-11-28 12:50:47.477968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.478000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.478206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.478238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.478535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.478567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.478812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.478845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.479028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.479060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 
00:27:05.245 [2024-11-28 12:50:47.479186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.479223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.479405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.479420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.479634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.479666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.479886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.479917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.480185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.480218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 
00:27:05.245 [2024-11-28 12:50:47.480409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.480442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.480639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.480671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.480848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.480880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.481081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.481115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.481308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.481327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 
00:27:05.245 [2024-11-28 12:50:47.481412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.481454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.481731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.481764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.482010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.482043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.482309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.482342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 00:27:05.245 [2024-11-28 12:50:47.482544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.245 [2024-11-28 12:50:47.482576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.245 qpair failed and we were unable to recover it. 
00:27:05.245 [2024-11-28 12:50:47.482723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-11-28 12:50:47.482754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.246 qpair failed and we were unable to recover it. 00:27:05.246 [2024-11-28 12:50:47.483000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-11-28 12:50:47.483033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.246 qpair failed and we were unable to recover it. 00:27:05.246 [2024-11-28 12:50:47.483214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-11-28 12:50:47.483246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.246 qpair failed and we were unable to recover it. 00:27:05.246 [2024-11-28 12:50:47.483385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-11-28 12:50:47.483417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.246 qpair failed and we were unable to recover it. 00:27:05.246 [2024-11-28 12:50:47.483596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-11-28 12:50:47.483612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.246 qpair failed and we were unable to recover it. 
00:27:05.246 [2024-11-28 12:50:47.483694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-11-28 12:50:47.483710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.246 qpair failed and we were unable to recover it. 00:27:05.246 [2024-11-28 12:50:47.483857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-11-28 12:50:47.483873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.246 qpair failed and we were unable to recover it. 00:27:05.246 [2024-11-28 12:50:47.484056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-11-28 12:50:47.484072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.246 qpair failed and we were unable to recover it. 00:27:05.246 [2024-11-28 12:50:47.484234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-11-28 12:50:47.484249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.246 qpair failed and we were unable to recover it. 00:27:05.246 [2024-11-28 12:50:47.484358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-11-28 12:50:47.484374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.246 qpair failed and we were unable to recover it. 
00:27:05.246 [2024-11-28 12:50:47.484537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-11-28 12:50:47.484569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.246 qpair failed and we were unable to recover it. 00:27:05.246 [2024-11-28 12:50:47.484824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-11-28 12:50:47.484855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.246 qpair failed and we were unable to recover it. 00:27:05.246 [2024-11-28 12:50:47.485039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-11-28 12:50:47.485072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.246 qpair failed and we were unable to recover it. 00:27:05.246 [2024-11-28 12:50:47.485189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-11-28 12:50:47.485220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.246 qpair failed and we were unable to recover it. 00:27:05.246 [2024-11-28 12:50:47.485402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-11-28 12:50:47.485444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.246 qpair failed and we were unable to recover it. 
00:27:05.246 [2024-11-28 12:50:47.485524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-11-28 12:50:47.485539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.246 qpair failed and we were unable to recover it. 00:27:05.246 [2024-11-28 12:50:47.485691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-11-28 12:50:47.485707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.246 qpair failed and we were unable to recover it. 00:27:05.246 [2024-11-28 12:50:47.485854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-11-28 12:50:47.485869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.246 qpair failed and we were unable to recover it. 00:27:05.246 [2024-11-28 12:50:47.486031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-11-28 12:50:47.486064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.246 qpair failed and we were unable to recover it. 00:27:05.246 [2024-11-28 12:50:47.486319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-11-28 12:50:47.486352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.246 qpair failed and we were unable to recover it. 
00:27:05.246 [2024-11-28 12:50:47.486530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.246 [2024-11-28 12:50:47.486561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.246 qpair failed and we were unable to recover it. 
00:27:05.246-00:27:05.250 [previous three messages repeated for each retried connection attempt from 12:50:47.486798 through 12:50:47.509013; identical in every repeat: tqpair=0x7f8c64000b90, addr=10.0.0.2, port=4420, errno = 111]
00:27:05.250 [2024-11-28 12:50:47.509261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.250 [2024-11-28 12:50:47.509294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.250 qpair failed and we were unable to recover it. 00:27:05.250 [2024-11-28 12:50:47.509485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.250 [2024-11-28 12:50:47.509516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.250 qpair failed and we were unable to recover it. 00:27:05.250 [2024-11-28 12:50:47.509634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.250 [2024-11-28 12:50:47.509666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.250 qpair failed and we were unable to recover it. 00:27:05.250 [2024-11-28 12:50:47.509854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.250 [2024-11-28 12:50:47.509886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.250 qpair failed and we were unable to recover it. 00:27:05.250 [2024-11-28 12:50:47.510129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.250 [2024-11-28 12:50:47.510162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.250 qpair failed and we were unable to recover it. 
00:27:05.250 [2024-11-28 12:50:47.510279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.510308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 00:27:05.251 [2024-11-28 12:50:47.510395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.510410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 00:27:05.251 [2024-11-28 12:50:47.510555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.510570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 00:27:05.251 [2024-11-28 12:50:47.510803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.510818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 00:27:05.251 [2024-11-28 12:50:47.510931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.510949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 
00:27:05.251 [2024-11-28 12:50:47.511201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.511217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 00:27:05.251 [2024-11-28 12:50:47.511308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.511323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 00:27:05.251 [2024-11-28 12:50:47.511553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.511585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 00:27:05.251 [2024-11-28 12:50:47.511770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.511802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 00:27:05.251 [2024-11-28 12:50:47.511997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.512037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 
00:27:05.251 [2024-11-28 12:50:47.512221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.512236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 00:27:05.251 [2024-11-28 12:50:47.512309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.512324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 00:27:05.251 [2024-11-28 12:50:47.512579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.512611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 00:27:05.251 [2024-11-28 12:50:47.512797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.512830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 00:27:05.251 [2024-11-28 12:50:47.513020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.513057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 
00:27:05.251 [2024-11-28 12:50:47.513249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.513280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 00:27:05.251 [2024-11-28 12:50:47.513476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.513491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 00:27:05.251 [2024-11-28 12:50:47.513603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.513619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 00:27:05.251 [2024-11-28 12:50:47.513765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.513781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 00:27:05.251 [2024-11-28 12:50:47.513940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.513963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 
00:27:05.251 [2024-11-28 12:50:47.514132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.514164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 00:27:05.251 [2024-11-28 12:50:47.514338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.514369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 00:27:05.251 [2024-11-28 12:50:47.514516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.514547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 00:27:05.251 [2024-11-28 12:50:47.514670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.514702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 00:27:05.251 [2024-11-28 12:50:47.514909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.514925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 
00:27:05.251 [2024-11-28 12:50:47.515170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.515186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 00:27:05.251 [2024-11-28 12:50:47.515272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.515288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 00:27:05.251 [2024-11-28 12:50:47.515363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.515378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.251 qpair failed and we were unable to recover it. 00:27:05.251 [2024-11-28 12:50:47.515466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.251 [2024-11-28 12:50:47.515481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.515577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.515593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 
00:27:05.252 [2024-11-28 12:50:47.515678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.515694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.515796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.515811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.515883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.515899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.516047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.516063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.516278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.516311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 
00:27:05.252 [2024-11-28 12:50:47.516439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.516471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.516610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.516641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.516907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.516939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.517168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.517200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.517394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.517424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 
00:27:05.252 [2024-11-28 12:50:47.517610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.517642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.517836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.517868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.517980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.518013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.518156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.518187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.518436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.518467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 
00:27:05.252 [2024-11-28 12:50:47.518596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.518611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.518830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.518863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.519070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.519102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.519302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.519335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.519459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.519475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 
00:27:05.252 [2024-11-28 12:50:47.519564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.519580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.519664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.519683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.519866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.519900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.520080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.520112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.520325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.520362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 
00:27:05.252 [2024-11-28 12:50:47.520500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.520515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.520589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.520605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.520846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.252 [2024-11-28 12:50:47.520878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.252 qpair failed and we were unable to recover it. 00:27:05.252 [2024-11-28 12:50:47.521015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.521048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 00:27:05.253 [2024-11-28 12:50:47.521174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.521206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 
00:27:05.253 [2024-11-28 12:50:47.521452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.521484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 00:27:05.253 [2024-11-28 12:50:47.521612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.521644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 00:27:05.253 [2024-11-28 12:50:47.521939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.521982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 00:27:05.253 [2024-11-28 12:50:47.522122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.522154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 00:27:05.253 [2024-11-28 12:50:47.522401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.522433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 
00:27:05.253 [2024-11-28 12:50:47.522543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.522558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 00:27:05.253 [2024-11-28 12:50:47.522748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.522780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 00:27:05.253 [2024-11-28 12:50:47.522962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.522996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 00:27:05.253 [2024-11-28 12:50:47.523222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.523255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 00:27:05.253 [2024-11-28 12:50:47.523447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.523462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 
00:27:05.253 [2024-11-28 12:50:47.523549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.523564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 00:27:05.253 [2024-11-28 12:50:47.523644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.523660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 00:27:05.253 [2024-11-28 12:50:47.523813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.523844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 00:27:05.253 [2024-11-28 12:50:47.524040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.524072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 00:27:05.253 [2024-11-28 12:50:47.524258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.524289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 
00:27:05.253 [2024-11-28 12:50:47.524421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.524437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 00:27:05.253 [2024-11-28 12:50:47.524579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.524594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 00:27:05.253 [2024-11-28 12:50:47.524803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.524819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 00:27:05.253 [2024-11-28 12:50:47.524973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.525001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 00:27:05.253 [2024-11-28 12:50:47.525163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.525195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 
00:27:05.253 [2024-11-28 12:50:47.525493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.525525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 00:27:05.253 [2024-11-28 12:50:47.525669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.525684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 00:27:05.253 [2024-11-28 12:50:47.525944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.525993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.253 qpair failed and we were unable to recover it. 00:27:05.253 [2024-11-28 12:50:47.526137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.253 [2024-11-28 12:50:47.526167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.526349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.526380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 
00:27:05.254 [2024-11-28 12:50:47.526517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.526534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.526608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.526623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.526731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.526764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.526908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.526939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.527197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.527228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 
00:27:05.254 [2024-11-28 12:50:47.527340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.527371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.527500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.527541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.527642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.527657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.527799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.527839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.527966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.528009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 
00:27:05.254 [2024-11-28 12:50:47.528213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.528243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.528443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.528475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.528603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.528635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.528804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.528820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.528977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.528992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 
00:27:05.254 [2024-11-28 12:50:47.529080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.529097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.529242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.529278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.529424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.529454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.529636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.529669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.529815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.529831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 
00:27:05.254 [2024-11-28 12:50:47.530041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.530058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.530169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.530185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.530398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.530413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.530578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.530608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.530878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.530910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 
00:27:05.254 [2024-11-28 12:50:47.531103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.531137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.531318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.531349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.531595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.254 [2024-11-28 12:50:47.531625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.254 qpair failed and we were unable to recover it. 00:27:05.254 [2024-11-28 12:50:47.531820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.531835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.532031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.532065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 
00:27:05.255 [2024-11-28 12:50:47.532206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.532238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.532376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.532407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.532605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.532637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.532821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.532836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.532992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.533007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 
00:27:05.255 [2024-11-28 12:50:47.533278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.533294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.533477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.533493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.533599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.533615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.533702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.533717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.533866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.533881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 
00:27:05.255 [2024-11-28 12:50:47.534104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.534120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.534212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.534228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.534461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.534477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.534568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.534583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.534746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.534762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 
00:27:05.255 [2024-11-28 12:50:47.534843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.534859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.535071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.535088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.535181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.535198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.535339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.535373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.535641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.535680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 
00:27:05.255 [2024-11-28 12:50:47.535824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.535856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.535965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.535982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.536063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.536079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.536231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.536264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.536385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.536415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 
00:27:05.255 [2024-11-28 12:50:47.536596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.536628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.536811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.536827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.537031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.537065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.255 qpair failed and we were unable to recover it. 00:27:05.255 [2024-11-28 12:50:47.537254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.255 [2024-11-28 12:50:47.537286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.256 qpair failed and we were unable to recover it. 00:27:05.256 [2024-11-28 12:50:47.537414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-11-28 12:50:47.537445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.256 qpair failed and we were unable to recover it. 
00:27:05.256 [2024-11-28 12:50:47.537578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-11-28 12:50:47.537593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.256 qpair failed and we were unable to recover it. 00:27:05.256 [2024-11-28 12:50:47.537760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-11-28 12:50:47.537775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.256 qpair failed and we were unable to recover it. 00:27:05.256 [2024-11-28 12:50:47.537920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-11-28 12:50:47.537986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.256 qpair failed and we were unable to recover it. 00:27:05.256 [2024-11-28 12:50:47.538134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-11-28 12:50:47.538167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.256 qpair failed and we were unable to recover it. 00:27:05.256 [2024-11-28 12:50:47.538290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-11-28 12:50:47.538321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.256 qpair failed and we were unable to recover it. 
00:27:05.256 [2024-11-28 12:50:47.538506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-11-28 12:50:47.538538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.256 qpair failed and we were unable to recover it. 00:27:05.256 [2024-11-28 12:50:47.538749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-11-28 12:50:47.538765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.256 qpair failed and we were unable to recover it. 00:27:05.256 [2024-11-28 12:50:47.538953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-11-28 12:50:47.538969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.256 qpair failed and we were unable to recover it. 00:27:05.256 [2024-11-28 12:50:47.539116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-11-28 12:50:47.539132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.256 qpair failed and we were unable to recover it. 00:27:05.256 [2024-11-28 12:50:47.539214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-11-28 12:50:47.539230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.256 qpair failed and we were unable to recover it. 
00:27:05.256 [2024-11-28 12:50:47.539372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-11-28 12:50:47.539388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.256 qpair failed and we were unable to recover it. 00:27:05.256 [2024-11-28 12:50:47.539534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-11-28 12:50:47.539549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.256 qpair failed and we were unable to recover it. 00:27:05.256 [2024-11-28 12:50:47.539640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-11-28 12:50:47.539655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.256 qpair failed and we were unable to recover it. 00:27:05.256 [2024-11-28 12:50:47.539746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-11-28 12:50:47.539762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.256 qpair failed and we were unable to recover it. 00:27:05.256 [2024-11-28 12:50:47.539909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-11-28 12:50:47.539924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.256 qpair failed and we were unable to recover it. 
00:27:05.256 [2024-11-28 12:50:47.540022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-11-28 12:50:47.540037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.256 qpair failed and we were unable to recover it. 00:27:05.256 [2024-11-28 12:50:47.540202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-11-28 12:50:47.540240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.256 qpair failed and we were unable to recover it. 00:27:05.256 [2024-11-28 12:50:47.540393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-11-28 12:50:47.540410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.256 qpair failed and we were unable to recover it. 00:27:05.256 [2024-11-28 12:50:47.540486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-11-28 12:50:47.540502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.256 qpair failed and we were unable to recover it. 00:27:05.256 [2024-11-28 12:50:47.540665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.256 [2024-11-28 12:50:47.540680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.256 qpair failed and we were unable to recover it. 
00:27:05.256 [2024-11-28 12:50:47.540921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.256 [2024-11-28 12:50:47.540967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.256 qpair failed and we were unable to recover it.
00:27:05.257 [2024-11-28 12:50:47.547209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.257 [2024-11-28 12:50:47.547245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.257 qpair failed and we were unable to recover it.
00:27:05.259 [2024-11-28 12:50:47.555160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.259 [2024-11-28 12:50:47.555231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:05.259 qpair failed and we were unable to recover it.
00:27:05.260 [2024-11-28 12:50:47.565623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.260 [2024-11-28 12:50:47.565655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.260 qpair failed and we were unable to recover it. 00:27:05.260 [2024-11-28 12:50:47.565767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.260 [2024-11-28 12:50:47.565782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.260 qpair failed and we were unable to recover it. 00:27:05.260 [2024-11-28 12:50:47.565977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.260 [2024-11-28 12:50:47.566013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.260 qpair failed and we were unable to recover it. 00:27:05.260 [2024-11-28 12:50:47.566249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.260 [2024-11-28 12:50:47.566280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.260 qpair failed and we were unable to recover it. 00:27:05.260 [2024-11-28 12:50:47.566487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.260 [2024-11-28 12:50:47.566518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.260 qpair failed and we were unable to recover it. 
00:27:05.260 [2024-11-28 12:50:47.566659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.260 [2024-11-28 12:50:47.566690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.260 qpair failed and we were unable to recover it. 00:27:05.260 [2024-11-28 12:50:47.566876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.260 [2024-11-28 12:50:47.566909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.260 qpair failed and we were unable to recover it. 00:27:05.260 [2024-11-28 12:50:47.567109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.260 [2024-11-28 12:50:47.567142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.260 qpair failed and we were unable to recover it. 00:27:05.260 [2024-11-28 12:50:47.567339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.260 [2024-11-28 12:50:47.567372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.260 qpair failed and we were unable to recover it. 00:27:05.260 [2024-11-28 12:50:47.567585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.567616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 
00:27:05.261 [2024-11-28 12:50:47.567862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.567878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.568080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.568113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.568230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.568260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.568441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.568474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.568614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.568646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 
00:27:05.261 [2024-11-28 12:50:47.568938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.568985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.569186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.569217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.569414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.569446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.569637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.569668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.569923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.569939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 
00:27:05.261 [2024-11-28 12:50:47.570104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.570119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.570385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.570417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.570610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.570641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.570895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.570927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.571126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.571199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 
00:27:05.261 [2024-11-28 12:50:47.571354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.571390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.571525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.571557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.571741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.571758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.571971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.572005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.572213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.572246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 
00:27:05.261 [2024-11-28 12:50:47.572514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.572546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.572686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.572720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.572903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.572935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.573209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.573242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.573450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.573481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 
00:27:05.261 [2024-11-28 12:50:47.573602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.573618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.573690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.573706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.573784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.573799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.573983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-11-28 12:50:47.574018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.261 qpair failed and we were unable to recover it. 00:27:05.261 [2024-11-28 12:50:47.574265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.574297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 
00:27:05.262 [2024-11-28 12:50:47.574545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.574578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.574760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.574776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.574852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.574871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.575129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.575163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.575347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.575379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 
00:27:05.262 [2024-11-28 12:50:47.575651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.575684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.575881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.575897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.576069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.576085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.576297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.576328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.576469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.576502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 
00:27:05.262 [2024-11-28 12:50:47.576697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.576729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.576903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.576919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.577096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.577112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.577265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.577301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.577493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.577526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 
00:27:05.262 [2024-11-28 12:50:47.577666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.577699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.577878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.577894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.578063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.578097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.578364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.578397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.578543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.578574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 
00:27:05.262 [2024-11-28 12:50:47.578693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.578709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.578812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.578827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.579058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.579091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.579311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.579343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.579479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.579510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 
00:27:05.262 [2024-11-28 12:50:47.579669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.579684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.579861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.579893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.580146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.580179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.580295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.580327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.580516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.580549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 
00:27:05.262 [2024-11-28 12:50:47.580755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.580788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.262 qpair failed and we were unable to recover it. 00:27:05.262 [2024-11-28 12:50:47.580916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-11-28 12:50:47.580956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.263 qpair failed and we were unable to recover it. 00:27:05.263 [2024-11-28 12:50:47.581142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.263 [2024-11-28 12:50:47.581174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.263 qpair failed and we were unable to recover it. 00:27:05.263 [2024-11-28 12:50:47.581443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.263 [2024-11-28 12:50:47.581474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.263 qpair failed and we were unable to recover it. 00:27:05.263 [2024-11-28 12:50:47.581599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.263 [2024-11-28 12:50:47.581631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.263 qpair failed and we were unable to recover it. 
00:27:05.263 [2024-11-28 12:50:47.581767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.581799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.581999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.582032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.582293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.582327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.582436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.582469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.582736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.582769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.582991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.583008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.583101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.583134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.583325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.583357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.583574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.583613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.583732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.583747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.583933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.583972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.584120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.584152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.584268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.584301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.584424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.584465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.584648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.584663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.584824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.584857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.585103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.585136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.585329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.585362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.585576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.585609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.585816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.585848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.586092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.586125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.586311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.586344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.586526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.586559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.586747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.586780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.586924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.586967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.587172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.263 [2024-11-28 12:50:47.587205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.263 qpair failed and we were unable to recover it.
00:27:05.263 [2024-11-28 12:50:47.587330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.587362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.587604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.587620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.587873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.587912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.588174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.588245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.588563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.588600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.588850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.588884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.589180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.589214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.589434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.589467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.589739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.589770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.590021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.590062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.590306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.590339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.590533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.590565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.590746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.590779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.591028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.591044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.591274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.591289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.591453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.591469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.591709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.591741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.591973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.592005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.592218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.592250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.592433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.592464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.592660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.592693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.592960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.592977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.593118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.593134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.593286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.593301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.264 [2024-11-28 12:50:47.593392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.264 [2024-11-28 12:50:47.593407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.264 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.593554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.593569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.593734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.593751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.593904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.593919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.594086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.594103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.594265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.594280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.594454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.594470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.594560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.594575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.594790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.594805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.594968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.594985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.595082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.595100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.595308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.595322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.595509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.595529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.595616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.595632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.595776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.595791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.595878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.595894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.596076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.596093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.596318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.596335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.596507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.596522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.596614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.596631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.596846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.596879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.597150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.597183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.597377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.597409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.597615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.597647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.597849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.597882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.598051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.598067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.598176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.598192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.598270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.598286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.598367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.598383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.265 [2024-11-28 12:50:47.598482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.265 [2024-11-28 12:50:47.598514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.265 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.598639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.598672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.598875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.598906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.599033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.599065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.599242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.599274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.599529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.599562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.599811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.599827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.599977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.599992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.600147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.600180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.600470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.600500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.600628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.600665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.600845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.600861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.600966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.600982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.601222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.601255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.601436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.601469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.601656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.601689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.601873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.601889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.601963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.601980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.602153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.602186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.602366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.602398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.602515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.602548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.602678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.602694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.602846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.602881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.603063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.603097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.603245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.603277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.603389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.603420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.603619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.603652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.603773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.603805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.603994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.604029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.604196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.604212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.604429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.604462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.604603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.604618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.604771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.604787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.605001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.266 [2024-11-28 12:50:47.605034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.266 qpair failed and we were unable to recover it.
00:27:05.266 [2024-11-28 12:50:47.605209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.267 [2024-11-28 12:50:47.605241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.267 qpair failed and we were unable to recover it.
00:27:05.267 [2024-11-28 12:50:47.605385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.267 [2024-11-28 12:50:47.605416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420
00:27:05.267 qpair failed and we were unable to recover it.
00:27:05.267 [2024-11-28 12:50:47.605611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.605645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.605773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.605806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.605960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.605994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.606212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.606245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.606371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.606403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 
00:27:05.267 [2024-11-28 12:50:47.606698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.606731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.606905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.606922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.607145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.607178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.607457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.607488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.607626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.607657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 
00:27:05.267 [2024-11-28 12:50:47.607847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.607880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.608111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.608144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.608285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.608318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.608530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.608562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.608751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.608766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 
00:27:05.267 [2024-11-28 12:50:47.608921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.608972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.609169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.609202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.609339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.609371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.609570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.609602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.609815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.609849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 
00:27:05.267 [2024-11-28 12:50:47.609990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.610024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.610272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.610304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.610521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.610554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.610734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.610766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.610971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.611004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 
00:27:05.267 [2024-11-28 12:50:47.611205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.611236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.611431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.611464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.611707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.611723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.611896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.611928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 00:27:05.267 [2024-11-28 12:50:47.612071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.267 [2024-11-28 12:50:47.612104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.267 qpair failed and we were unable to recover it. 
00:27:05.267 [2024-11-28 12:50:47.612368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.612400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.612540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.612572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.612817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.612850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.612988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.613005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.613258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.613289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 
00:27:05.268 [2024-11-28 12:50:47.613474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.613507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.613754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.613786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.613922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.613964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.614158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.614189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.614371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.614404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 
00:27:05.268 [2024-11-28 12:50:47.614585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.614618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.614805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.614837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.615022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.615041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.615284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.615315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.615512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.615545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 
00:27:05.268 [2024-11-28 12:50:47.615737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.615770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.616080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.616114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.616305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.616338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.616473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.616505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.616629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.616662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 
00:27:05.268 [2024-11-28 12:50:47.616908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.616940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.617223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.617255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.617503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.617535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.617738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.617770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.618011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.618028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 
00:27:05.268 [2024-11-28 12:50:47.618263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.618279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.618386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.618402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.618538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.618571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.618780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.618796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 00:27:05.268 [2024-11-28 12:50:47.618967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.619001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.268 qpair failed and we were unable to recover it. 
00:27:05.268 [2024-11-28 12:50:47.619141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.268 [2024-11-28 12:50:47.619173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 00:27:05.269 [2024-11-28 12:50:47.619423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-11-28 12:50:47.619455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 00:27:05.269 [2024-11-28 12:50:47.619585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-11-28 12:50:47.619618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 00:27:05.269 [2024-11-28 12:50:47.619867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-11-28 12:50:47.619899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 00:27:05.269 [2024-11-28 12:50:47.620039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-11-28 12:50:47.620073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 
00:27:05.269 [2024-11-28 12:50:47.620370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-11-28 12:50:47.620402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 00:27:05.269 [2024-11-28 12:50:47.620532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-11-28 12:50:47.620564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 00:27:05.269 [2024-11-28 12:50:47.620784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-11-28 12:50:47.620816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 00:27:05.269 [2024-11-28 12:50:47.621039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-11-28 12:50:47.621055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 00:27:05.269 [2024-11-28 12:50:47.621260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-11-28 12:50:47.621277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 
00:27:05.269 [2024-11-28 12:50:47.621514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-11-28 12:50:47.621529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 00:27:05.269 [2024-11-28 12:50:47.621612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-11-28 12:50:47.621627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 00:27:05.269 [2024-11-28 12:50:47.621854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-11-28 12:50:47.621870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 00:27:05.269 [2024-11-28 12:50:47.621958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-11-28 12:50:47.621975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 00:27:05.269 [2024-11-28 12:50:47.622134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-11-28 12:50:47.622150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it. 
00:27:05.269 [2024-11-28 12:50:47.622316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.269 [2024-11-28 12:50:47.622332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.269 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / "qpair failed and we were unable to recover it" pair for tqpair=0xd02be0 with addr=10.0.0.2, port=4420 repeats continuously from 12:50:47.622 through 12:50:47.637 ...]
00:27:05.271 [2024-11-28 12:50:47.638135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.271 [2024-11-28 12:50:47.638207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.271 qpair failed and we were unable to recover it. 00:27:05.271 [2024-11-28 12:50:47.638406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.271 [2024-11-28 12:50:47.638487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.271 qpair failed and we were unable to recover it.
[... the same failure pair for tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 repeats continuously from 12:50:47.638 through 12:50:47.647 ...]
00:27:05.272 [2024-11-28 12:50:47.647780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.272 [2024-11-28 12:50:47.647812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.272 qpair failed and we were unable to recover it. 00:27:05.272 [2024-11-28 12:50:47.648030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.272 [2024-11-28 12:50:47.648074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.272 qpair failed and we were unable to recover it. 00:27:05.272 [2024-11-28 12:50:47.648281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.272 [2024-11-28 12:50:47.648313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.272 qpair failed and we were unable to recover it. 00:27:05.272 [2024-11-28 12:50:47.648454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.272 [2024-11-28 12:50:47.648485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.272 qpair failed and we were unable to recover it. 00:27:05.272 [2024-11-28 12:50:47.648754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.272 [2024-11-28 12:50:47.648786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.272 qpair failed and we were unable to recover it. 
00:27:05.272 [2024-11-28 12:50:47.649032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.272 [2024-11-28 12:50:47.649064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.272 qpair failed and we were unable to recover it. 00:27:05.272 [2024-11-28 12:50:47.649272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.272 [2024-11-28 12:50:47.649302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.272 qpair failed and we were unable to recover it. 00:27:05.272 [2024-11-28 12:50:47.649437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.272 [2024-11-28 12:50:47.649469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.272 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.649589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.649621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.649732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.649762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 
00:27:05.273 [2024-11-28 12:50:47.649994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.650010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.650107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.650123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.650218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.650248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.650425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.650457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.650707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.650740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 
00:27:05.273 [2024-11-28 12:50:47.651021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.651038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.651121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.651137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.651349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.651379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.651560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.651592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.651811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.651842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 
00:27:05.273 [2024-11-28 12:50:47.652093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.652109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.652264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.652279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.652506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.652538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.652720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.652753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.652864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.652895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 
00:27:05.273 [2024-11-28 12:50:47.653034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.653049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.653133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.653148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.653366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.653397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.653594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.653627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.653814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.653845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 
00:27:05.273 [2024-11-28 12:50:47.654024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.654040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.654137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.654153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.654360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.654375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.654544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.654560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.654740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.654755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 
00:27:05.273 [2024-11-28 12:50:47.654918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.654979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.655235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.655265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.655411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.655440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.655568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.655600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.655740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.655770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 
00:27:05.273 [2024-11-28 12:50:47.655987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.656002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.656103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.656121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.656338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.656370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.656482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.273 [2024-11-28 12:50:47.656513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.273 qpair failed and we were unable to recover it. 00:27:05.273 [2024-11-28 12:50:47.656689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.656720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 
00:27:05.274 [2024-11-28 12:50:47.656963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.656980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.657167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.657200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.657381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.657414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.657594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.657636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.657894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.657909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 
00:27:05.274 [2024-11-28 12:50:47.658046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.658061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.658269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.658285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.658373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.658388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.658569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.658583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.658730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.658761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 
00:27:05.274 [2024-11-28 12:50:47.658967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.659001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.659132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.659164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.659362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.659394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.659601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.659633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.659766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.659796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 
00:27:05.274 [2024-11-28 12:50:47.660000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.660017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.660185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.660200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.660350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.660364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.660534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.660566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.660756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.660787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 
00:27:05.274 [2024-11-28 12:50:47.661008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.661041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.661239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.661271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.661406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.661436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.661639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.661671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.661850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.661866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 
00:27:05.274 [2024-11-28 12:50:47.661942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.661963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.662207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.662223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.662374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.662389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.662466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.662481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.662623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.662638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 
00:27:05.274 [2024-11-28 12:50:47.662721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.662736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.662825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.662841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.662960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.274 [2024-11-28 12:50:47.662976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.274 qpair failed and we were unable to recover it. 00:27:05.274 [2024-11-28 12:50:47.663167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.275 [2024-11-28 12:50:47.663200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.275 qpair failed and we were unable to recover it. 00:27:05.275 [2024-11-28 12:50:47.663337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.275 [2024-11-28 12:50:47.663367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.275 qpair failed and we were unable to recover it. 
00:27:05.275 [2024-11-28 12:50:47.663547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.275 [2024-11-28 12:50:47.663579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.275 qpair failed and we were unable to recover it. 00:27:05.275 [2024-11-28 12:50:47.663769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.275 [2024-11-28 12:50:47.663806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.275 qpair failed and we were unable to recover it. 00:27:05.275 [2024-11-28 12:50:47.664010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.275 [2024-11-28 12:50:47.664044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.275 qpair failed and we were unable to recover it. 00:27:05.275 [2024-11-28 12:50:47.664321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.275 [2024-11-28 12:50:47.664353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.275 qpair failed and we were unable to recover it. 00:27:05.275 [2024-11-28 12:50:47.664571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.275 [2024-11-28 12:50:47.664602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.275 qpair failed and we were unable to recover it. 
00:27:05.275 [2024-11-28 12:50:47.664801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.664832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.664972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.664988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.665222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.665238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.665443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.665458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.665610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.665625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.665716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.665731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.665883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.665898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.666055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.666070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.666158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.666173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.666263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.666279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.666423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.666438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.666558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.666588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.666726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.666757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.666936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.666976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.667161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.667176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.667323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.667354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.667501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.667534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.667648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.667679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.667890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.667931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.668182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.668198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.668357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.668373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.668518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.668533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.668614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.668629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.668893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.668925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.669120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.669152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.669370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.669402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.669532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.669564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.669695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.669724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.669858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.275 [2024-11-28 12:50:47.669889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.275 qpair failed and we were unable to recover it.
00:27:05.275 [2024-11-28 12:50:47.670024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.670040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.670259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.670290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.670425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.670457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.670679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.670711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.670849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.670865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.670975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.670991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.671149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.671182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.671359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.671394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.671586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.671616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.671795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.671811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.671884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.671900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.671986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.672022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.672272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.672305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.672578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.672610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.672881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.672913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.673159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.673193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.673464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.673496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.673685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.673717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.673886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.673901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.674054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.674071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.674240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.674256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.674424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.674456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.674592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.674624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.674803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.674835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.675059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.675091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.675381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.675412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.675589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.675621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.675845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.675877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.676068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.676100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.676303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.676336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.676470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.676502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.676773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.676805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.677114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.677147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.677343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.677375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.677659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.677691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.677804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.677820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.677909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.677925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.678085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.678101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.678253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.276 [2024-11-28 12:50:47.678286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.276 qpair failed and we were unable to recover it.
00:27:05.276 [2024-11-28 12:50:47.678463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.678495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.678711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.678750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.679011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.679058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.679257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.679290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.679422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.679453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.679714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.679746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.679863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.679894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.680088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.680122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.680303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.680340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.680584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.680615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.680802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.680818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.680908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.680942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.681131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.681163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.681355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.681387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.681588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.681621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.681892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.681923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.682060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.682076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.682256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.682272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.682431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.682447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.682628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.682660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.682862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.682895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.683103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.683135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.683355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.683371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.683520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.683536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.683744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.683775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.684052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.684086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.684288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.277 [2024-11-28 12:50:47.684304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.277 qpair failed and we were unable to recover it.
00:27:05.277 [2024-11-28 12:50:47.684522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.277 [2024-11-28 12:50:47.684554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.277 qpair failed and we were unable to recover it. 00:27:05.277 [2024-11-28 12:50:47.684774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.277 [2024-11-28 12:50:47.684806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.277 qpair failed and we were unable to recover it. 00:27:05.277 [2024-11-28 12:50:47.684995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.277 [2024-11-28 12:50:47.685028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.277 qpair failed and we were unable to recover it. 00:27:05.277 [2024-11-28 12:50:47.685266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.277 [2024-11-28 12:50:47.685282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.277 qpair failed and we were unable to recover it. 00:27:05.277 [2024-11-28 12:50:47.685435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.277 [2024-11-28 12:50:47.685451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.277 qpair failed and we were unable to recover it. 
00:27:05.277 [2024-11-28 12:50:47.685542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.277 [2024-11-28 12:50:47.685558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.277 qpair failed and we were unable to recover it. 00:27:05.277 [2024-11-28 12:50:47.685711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.277 [2024-11-28 12:50:47.685726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.277 qpair failed and we were unable to recover it. 00:27:05.277 [2024-11-28 12:50:47.685861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.277 [2024-11-28 12:50:47.685877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.277 qpair failed and we were unable to recover it. 00:27:05.277 [2024-11-28 12:50:47.686039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.277 [2024-11-28 12:50:47.686054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.277 qpair failed and we were unable to recover it. 00:27:05.277 [2024-11-28 12:50:47.686132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.277 [2024-11-28 12:50:47.686148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.277 qpair failed and we were unable to recover it. 
00:27:05.277 [2024-11-28 12:50:47.686304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.277 [2024-11-28 12:50:47.686337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.277 qpair failed and we were unable to recover it. 00:27:05.277 [2024-11-28 12:50:47.686473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.277 [2024-11-28 12:50:47.686503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.277 qpair failed and we were unable to recover it. 00:27:05.277 [2024-11-28 12:50:47.686634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.686666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.686860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.686893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.687119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.687152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 
00:27:05.278 [2024-11-28 12:50:47.687341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.687372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.687570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.687600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.687730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.687744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.687904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.687920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.688000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.688044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 
00:27:05.278 [2024-11-28 12:50:47.688179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.688211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.688456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.688497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.688625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.688656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.688789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.688819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.689012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.689045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 
00:27:05.278 [2024-11-28 12:50:47.689171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.689187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.689384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.689415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.689612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.689643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.689834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.689865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.689980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.689996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 
00:27:05.278 [2024-11-28 12:50:47.690072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.690087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.690230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.690245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.690331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.690359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.690566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.690598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.690702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.690733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 
00:27:05.278 [2024-11-28 12:50:47.690985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.691018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.691264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.691296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.691482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.691512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.691694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.691725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.691852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.691868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 
00:27:05.278 [2024-11-28 12:50:47.692041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.692073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.692204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.692235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.692419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.692452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.692633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.692665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.692970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.693003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 
00:27:05.278 [2024-11-28 12:50:47.693247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.693278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.693532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.693563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.693743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.693775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.694028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.694101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.694245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.694282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 
00:27:05.278 [2024-11-28 12:50:47.694479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.694512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.694696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.694728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.694930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.278 [2024-11-28 12:50:47.694980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.278 qpair failed and we were unable to recover it. 00:27:05.278 [2024-11-28 12:50:47.695253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.695285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.695488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.695520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 
00:27:05.279 [2024-11-28 12:50:47.695798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.695831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.695973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.696008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.696135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.696167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.696297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.696330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.696514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.696545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 
00:27:05.279 [2024-11-28 12:50:47.696727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.696758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.696899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.696939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.697223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.697255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.697378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.697410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.697614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.697645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 
00:27:05.279 [2024-11-28 12:50:47.697840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.697872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.698013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.698045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.698303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.698335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.698514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.698546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.698820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.698852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 
00:27:05.279 [2024-11-28 12:50:47.699052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.699085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.699269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.699301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.699498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.699531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.699759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.699790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.699987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.700022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 
00:27:05.279 [2024-11-28 12:50:47.700159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.700174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.700363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.700395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.700529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.700561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.700810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.700842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.700982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.700999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 
00:27:05.279 [2024-11-28 12:50:47.701157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.701188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.701311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.701342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.701471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.701502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.701684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.701715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.279 [2024-11-28 12:50:47.701946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.701965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 
00:27:05.279 [2024-11-28 12:50:47.702129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.279 [2024-11-28 12:50:47.702146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.279 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.702235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.702251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.702419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.702434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.702551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.702623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.702808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.702881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 
00:27:05.280 [2024-11-28 12:50:47.703052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.703106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.703314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.703345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.703477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.703509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.703653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.703686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.703813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.703845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 
00:27:05.280 [2024-11-28 12:50:47.704108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.704124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.704233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.704248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.704414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.704445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.704574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.704607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.704732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.704763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 
00:27:05.280 [2024-11-28 12:50:47.704892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.704924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.705198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.705216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.705423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.705438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.705596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.705627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.705819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.705850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 
00:27:05.280 [2024-11-28 12:50:47.706032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.706064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.706243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.706259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.706410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.706443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.706556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.706588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.706718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.706750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 
00:27:05.280 [2024-11-28 12:50:47.706896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.706927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.707053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.707069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.707171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.707187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.707392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.707408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.707576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.707591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 
00:27:05.280 [2024-11-28 12:50:47.707778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.707809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.707932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.707977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.708178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.708210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.708396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.708413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.708576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.708607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 
00:27:05.280 [2024-11-28 12:50:47.708755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.708786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.708974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.709008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.709199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.709231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.709359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.709389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.709569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.709600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 
00:27:05.280 [2024-11-28 12:50:47.709791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.709823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.710004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.710036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.280 qpair failed and we were unable to recover it. 00:27:05.280 [2024-11-28 12:50:47.710259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.280 [2024-11-28 12:50:47.710291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.710587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.710628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.710789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.710831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 
00:27:05.281 [2024-11-28 12:50:47.711035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.711070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.711203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.711236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.711513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.711546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.711767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.711800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.712002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.712035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 
00:27:05.281 [2024-11-28 12:50:47.712222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.712238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.712400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.712416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.712489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.712504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.712663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.712694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.712895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.712927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 
00:27:05.281 [2024-11-28 12:50:47.713266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.713282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.713415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.713431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.713654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.713687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.713880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.713913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.714112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.714145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 
00:27:05.281 [2024-11-28 12:50:47.714335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.714365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.714560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.714592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.714840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.714872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.714972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.714989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.715152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.715168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 
00:27:05.281 [2024-11-28 12:50:47.715320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.715335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.715490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.715522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.715769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.715802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.715962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.715997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.716195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.716226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 
00:27:05.281 [2024-11-28 12:50:47.716443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.716481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.716670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.716702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.716898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.716929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.717050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.717082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.717202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.717234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 
00:27:05.281 [2024-11-28 12:50:47.717480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.717512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.717703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.717735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.717902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.717918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.718138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.718170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.718344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.718376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 
00:27:05.281 [2024-11-28 12:50:47.718565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.718598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.718713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.718745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.718927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.718968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.719254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.719270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.719426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.719442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 
00:27:05.281 [2024-11-28 12:50:47.719599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.719614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.719703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.719719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.719812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.719828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.719919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.281 [2024-11-28 12:50:47.719935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.281 qpair failed and we were unable to recover it. 00:27:05.281 [2024-11-28 12:50:47.720043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.282 [2024-11-28 12:50:47.720059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.282 qpair failed and we were unable to recover it. 
00:27:05.282 [2024-11-28 12:50:47.720211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.282 [2024-11-28 12:50:47.720227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.282 qpair failed and we were unable to recover it. 00:27:05.282 [2024-11-28 12:50:47.720433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.282 [2024-11-28 12:50:47.720448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.282 qpair failed and we were unable to recover it. 00:27:05.282 [2024-11-28 12:50:47.720528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.282 [2024-11-28 12:50:47.720543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.282 qpair failed and we were unable to recover it. 00:27:05.282 [2024-11-28 12:50:47.720635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.282 [2024-11-28 12:50:47.720680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.282 qpair failed and we were unable to recover it. 00:27:05.282 [2024-11-28 12:50:47.720928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.282 [2024-11-28 12:50:47.720970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.282 qpair failed and we were unable to recover it. 
00:27:05.282 [2024-11-28 12:50:47.721151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.282 [2024-11-28 12:50:47.721182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.282 qpair failed and we were unable to recover it. 00:27:05.282 [2024-11-28 12:50:47.721376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.282 [2024-11-28 12:50:47.721392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.282 qpair failed and we were unable to recover it. 00:27:05.282 [2024-11-28 12:50:47.721495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.282 [2024-11-28 12:50:47.721511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.282 qpair failed and we were unable to recover it. 00:27:05.282 [2024-11-28 12:50:47.721608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.282 [2024-11-28 12:50:47.721623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.282 qpair failed and we were unable to recover it. 00:27:05.282 [2024-11-28 12:50:47.721783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.282 [2024-11-28 12:50:47.721799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.282 qpair failed and we were unable to recover it. 
00:27:05.566 [2024-11-28 12:50:47.726151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.566 [2024-11-28 12:50:47.726180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.566 qpair failed and we were unable to recover it. 
00:27:05.566 [2024-11-28 12:50:47.728797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.566 [2024-11-28 12:50:47.728866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.566 qpair failed and we were unable to recover it. 00:27:05.566 [2024-11-28 12:50:47.729125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.566 [2024-11-28 12:50:47.729197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.566 qpair failed and we were unable to recover it. 
00:27:05.568 [2024-11-28 12:50:47.743674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.568 [2024-11-28 12:50:47.743686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.568 qpair failed and we were unable to recover it. 00:27:05.568 [2024-11-28 12:50:47.743864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.568 [2024-11-28 12:50:47.743876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.568 qpair failed and we were unable to recover it. 00:27:05.568 [2024-11-28 12:50:47.743962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.568 [2024-11-28 12:50:47.743974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.568 qpair failed and we were unable to recover it. 00:27:05.568 [2024-11-28 12:50:47.744146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.568 [2024-11-28 12:50:47.744177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.568 qpair failed and we were unable to recover it. 00:27:05.568 [2024-11-28 12:50:47.744326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.568 [2024-11-28 12:50:47.744357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.568 qpair failed and we were unable to recover it. 
00:27:05.568 [2024-11-28 12:50:47.744573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.568 [2024-11-28 12:50:47.744606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.568 qpair failed and we were unable to recover it. 00:27:05.568 [2024-11-28 12:50:47.744745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.568 [2024-11-28 12:50:47.744777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.568 qpair failed and we were unable to recover it. 00:27:05.568 [2024-11-28 12:50:47.744965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.568 [2024-11-28 12:50:47.744999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.568 qpair failed and we were unable to recover it. 00:27:05.568 [2024-11-28 12:50:47.745210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.568 [2024-11-28 12:50:47.745232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.568 qpair failed and we were unable to recover it. 00:27:05.568 [2024-11-28 12:50:47.745396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.568 [2024-11-28 12:50:47.745430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.568 qpair failed and we were unable to recover it. 
00:27:05.568 [2024-11-28 12:50:47.745607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.568 [2024-11-28 12:50:47.745640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.568 qpair failed and we were unable to recover it. 00:27:05.568 [2024-11-28 12:50:47.745853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.568 [2024-11-28 12:50:47.745886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.568 qpair failed and we were unable to recover it. 00:27:05.568 [2024-11-28 12:50:47.746105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.568 [2024-11-28 12:50:47.746141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.568 qpair failed and we were unable to recover it. 00:27:05.568 [2024-11-28 12:50:47.746340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.568 [2024-11-28 12:50:47.746378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.568 qpair failed and we were unable to recover it. 00:27:05.568 [2024-11-28 12:50:47.746471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.568 [2024-11-28 12:50:47.746486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.568 qpair failed and we were unable to recover it. 
00:27:05.568 [2024-11-28 12:50:47.746568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.568 [2024-11-28 12:50:47.746583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.568 qpair failed and we were unable to recover it. 00:27:05.568 [2024-11-28 12:50:47.746796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.568 [2024-11-28 12:50:47.746812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.568 qpair failed and we were unable to recover it. 00:27:05.568 [2024-11-28 12:50:47.746909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.568 [2024-11-28 12:50:47.746922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.568 qpair failed and we were unable to recover it. 00:27:05.568 [2024-11-28 12:50:47.747112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.568 [2024-11-28 12:50:47.747147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.568 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.747339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.747371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 
00:27:05.569 [2024-11-28 12:50:47.747513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.747546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.747676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.747714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.747917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.747961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.748162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.748204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.748303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.748314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 
00:27:05.569 [2024-11-28 12:50:47.748489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.748522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.748740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.748772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.748912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.748945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.749143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.749154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.749358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.749390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 
00:27:05.569 [2024-11-28 12:50:47.749495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.749528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.749728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.749760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.749895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.749927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.750079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.750112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.750361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.750393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 
00:27:05.569 [2024-11-28 12:50:47.750549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.750581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.750763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.750796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.751044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.751057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.751258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.751270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.751427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.751438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 
00:27:05.569 [2024-11-28 12:50:47.751572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.751584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.751810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.751821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.751982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.752015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.752210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.752223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.752380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.752412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 
00:27:05.569 [2024-11-28 12:50:47.752533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.752566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.752699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.752732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.752918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.752970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.753236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.753309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.753577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.753650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 
00:27:05.569 [2024-11-28 12:50:47.753902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.753937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.754098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.754131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.754408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.754441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.754639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.754671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.754869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.754902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 
00:27:05.569 [2024-11-28 12:50:47.755110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.755144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.569 [2024-11-28 12:50:47.755404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.569 [2024-11-28 12:50:47.755436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.569 qpair failed and we were unable to recover it. 00:27:05.570 [2024-11-28 12:50:47.755624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.570 [2024-11-28 12:50:47.755656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.570 qpair failed and we were unable to recover it. 00:27:05.570 [2024-11-28 12:50:47.755772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.570 [2024-11-28 12:50:47.755804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.570 qpair failed and we were unable to recover it. 00:27:05.570 [2024-11-28 12:50:47.756047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.570 [2024-11-28 12:50:47.756063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.570 qpair failed and we were unable to recover it. 
00:27:05.570 [2024-11-28 12:50:47.756278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.570 [2024-11-28 12:50:47.756310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.570 qpair failed and we were unable to recover it. 00:27:05.570 [2024-11-28 12:50:47.756559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.570 [2024-11-28 12:50:47.756601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.570 qpair failed and we were unable to recover it. 00:27:05.570 [2024-11-28 12:50:47.756907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.570 [2024-11-28 12:50:47.756939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.570 qpair failed and we were unable to recover it. 00:27:05.570 [2024-11-28 12:50:47.757157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.570 [2024-11-28 12:50:47.757189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.570 qpair failed and we were unable to recover it. 00:27:05.570 [2024-11-28 12:50:47.757316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.570 [2024-11-28 12:50:47.757349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.570 qpair failed and we were unable to recover it. 
00:27:05.570 [2024-11-28 12:50:47.757621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.570 [2024-11-28 12:50:47.757652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.570 qpair failed and we were unable to recover it. 00:27:05.570 [2024-11-28 12:50:47.757862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.570 [2024-11-28 12:50:47.757894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.570 qpair failed and we were unable to recover it. 00:27:05.570 [2024-11-28 12:50:47.758091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.570 [2024-11-28 12:50:47.758108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.570 qpair failed and we were unable to recover it. 00:27:05.570 [2024-11-28 12:50:47.758291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.570 [2024-11-28 12:50:47.758307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.570 qpair failed and we were unable to recover it. 00:27:05.570 [2024-11-28 12:50:47.758472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.570 [2024-11-28 12:50:47.758487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.570 qpair failed and we were unable to recover it. 
00:27:05.570 [2024-11-28 12:50:47.758641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.570 [2024-11-28 12:50:47.758679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.570 qpair failed and we were unable to recover it. 00:27:05.570 [2024-11-28 12:50:47.758864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.570 [2024-11-28 12:50:47.758896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.570 qpair failed and we were unable to recover it. 00:27:05.570 [2024-11-28 12:50:47.759093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.570 [2024-11-28 12:50:47.759126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.570 qpair failed and we were unable to recover it. 00:27:05.570 [2024-11-28 12:50:47.759271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.570 [2024-11-28 12:50:47.759286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.570 qpair failed and we were unable to recover it. 00:27:05.570 [2024-11-28 12:50:47.759374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.570 [2024-11-28 12:50:47.759390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.570 qpair failed and we were unable to recover it. 
00:27:05.570 [2024-11-28 12:50:47.759559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.570 [2024-11-28 12:50:47.759575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.570 qpair failed and we were unable to recover it.
00:27:05.570-00:27:05.573 [12:50:47.759685 - 12:50:47.782760] (preceding connect() failed / sock connection error / qpair failed record repeated for tqpair=0x7f8c64000b90, 0xd02be0, 0x7f8c58000b90, and 0x7f8c5c000b90, all with addr=10.0.0.2, port=4420, errno = 111)
00:27:05.573 [2024-11-28 12:50:47.782849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.573 [2024-11-28 12:50:47.782861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.573 qpair failed and we were unable to recover it. 00:27:05.573 [2024-11-28 12:50:47.783063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.573 [2024-11-28 12:50:47.783097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.573 qpair failed and we were unable to recover it. 00:27:05.573 [2024-11-28 12:50:47.783221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.573 [2024-11-28 12:50:47.783253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.573 qpair failed and we were unable to recover it. 00:27:05.573 [2024-11-28 12:50:47.783461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.573 [2024-11-28 12:50:47.783494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.573 qpair failed and we were unable to recover it. 00:27:05.573 [2024-11-28 12:50:47.783674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.573 [2024-11-28 12:50:47.783706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.573 qpair failed and we were unable to recover it. 
00:27:05.573 [2024-11-28 12:50:47.783849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.573 [2024-11-28 12:50:47.783882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.573 qpair failed and we were unable to recover it. 00:27:05.573 [2024-11-28 12:50:47.784028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.573 [2024-11-28 12:50:47.784061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.573 qpair failed and we were unable to recover it. 00:27:05.573 [2024-11-28 12:50:47.784247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.573 [2024-11-28 12:50:47.784279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.573 qpair failed and we were unable to recover it. 00:27:05.573 [2024-11-28 12:50:47.784421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.573 [2024-11-28 12:50:47.784453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.573 qpair failed and we were unable to recover it. 00:27:05.573 [2024-11-28 12:50:47.784596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.573 [2024-11-28 12:50:47.784628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.573 qpair failed and we were unable to recover it. 
00:27:05.573 [2024-11-28 12:50:47.784775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.573 [2024-11-28 12:50:47.784808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.573 qpair failed and we were unable to recover it. 00:27:05.573 [2024-11-28 12:50:47.785030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.573 [2024-11-28 12:50:47.785064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.573 qpair failed and we were unable to recover it. 00:27:05.573 [2024-11-28 12:50:47.785193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.785205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.785349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.785361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.785451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.785462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 
00:27:05.574 [2024-11-28 12:50:47.785612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.785651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.785765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.785797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.785934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.785978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.786121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.786165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.786294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.786306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 
00:27:05.574 [2024-11-28 12:50:47.786453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.786467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.786650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.786661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.786751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.786761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.786906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.786938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.787181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.787214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 
00:27:05.574 [2024-11-28 12:50:47.787396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.787427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.787616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.787648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.787781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.787815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.787957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.787990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.788195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.788226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 
00:27:05.574 [2024-11-28 12:50:47.788366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.788398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.788585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.788595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.788678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.788689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.788829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.788839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.789009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.789042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 
00:27:05.574 [2024-11-28 12:50:47.789239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.789273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.789400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.789430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.789649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.789681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.789808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.789839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.790024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.790055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 
00:27:05.574 [2024-11-28 12:50:47.790188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.790220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.790425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.790435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.790513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.790546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.790686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.790716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 00:27:05.574 [2024-11-28 12:50:47.790916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.790965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.574 qpair failed and we were unable to recover it. 
00:27:05.574 [2024-11-28 12:50:47.791128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.574 [2024-11-28 12:50:47.791159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.791294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.791325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.791450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.791460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.791682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.791714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.791932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.791975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 
00:27:05.575 [2024-11-28 12:50:47.792215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.792246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.792444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.792474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.792584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.792616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.792795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.792827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.793109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.793141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 
00:27:05.575 [2024-11-28 12:50:47.793319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.793350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.793489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.793520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.793721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.793753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.793934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.793981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.794196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.794226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 
00:27:05.575 [2024-11-28 12:50:47.794341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.794372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.794566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.794598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.794707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.794737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.794870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.794900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.795057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.795088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 
00:27:05.575 [2024-11-28 12:50:47.795276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.795319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.795457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.795467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.795538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.795548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.795679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.795691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.795773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.795785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 
00:27:05.575 [2024-11-28 12:50:47.796007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.796017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.796078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.796087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.796207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.796217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.796313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.796324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.796401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.796411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 
00:27:05.575 [2024-11-28 12:50:47.796495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.796505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.796593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.796604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.796680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.796690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.796863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.796896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 00:27:05.575 [2024-11-28 12:50:47.797018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.575 [2024-11-28 12:50:47.797050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.575 qpair failed and we were unable to recover it. 
00:27:05.576 [2024-11-28 12:50:47.802721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.576 [2024-11-28 12:50:47.802752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:05.576 qpair failed and we were unable to recover it.
00:27:05.576 [2024-11-28 12:50:47.802884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.576 [2024-11-28 12:50:47.802915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:05.576 qpair failed and we were unable to recover it.
00:27:05.576 [2024-11-28 12:50:47.803102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.576 [2024-11-28 12:50:47.803163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:05.576 qpair failed and we were unable to recover it.
00:27:05.576 [2024-11-28 12:50:47.803335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.576 [2024-11-28 12:50:47.803351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:05.576 qpair failed and we were unable to recover it.
00:27:05.576 [2024-11-28 12:50:47.803495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.576 [2024-11-28 12:50:47.803508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:05.576 qpair failed and we were unable to recover it.
00:27:05.578 [2024-11-28 12:50:47.815912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.578 [2024-11-28 12:50:47.815943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.578 qpair failed and we were unable to recover it. 00:27:05.578 [2024-11-28 12:50:47.816092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.578 [2024-11-28 12:50:47.816103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.578 qpair failed and we were unable to recover it. 00:27:05.578 [2024-11-28 12:50:47.816209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.578 [2024-11-28 12:50:47.816219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.578 qpair failed and we were unable to recover it. 00:27:05.578 [2024-11-28 12:50:47.816377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.578 [2024-11-28 12:50:47.816408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.578 qpair failed and we were unable to recover it. 00:27:05.578 [2024-11-28 12:50:47.816530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.578 [2024-11-28 12:50:47.816561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.578 qpair failed and we were unable to recover it. 
00:27:05.578 [2024-11-28 12:50:47.816739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.578 [2024-11-28 12:50:47.816771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.578 qpair failed and we were unable to recover it. 00:27:05.578 [2024-11-28 12:50:47.817031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.578 [2024-11-28 12:50:47.817063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.578 qpair failed and we were unable to recover it. 00:27:05.578 [2024-11-28 12:50:47.817241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.817271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.817514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.817545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.817721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.817752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 
00:27:05.579 [2024-11-28 12:50:47.817926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.817965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.818216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.818247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.818445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.818477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.818693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.818724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.818977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.819009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 
00:27:05.579 [2024-11-28 12:50:47.819141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.819151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.819234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.819245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.819316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.819327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.819457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.819468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.819537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.819547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 
00:27:05.579 [2024-11-28 12:50:47.819600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.819610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.819749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.819760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.819841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.819851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.819929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.819940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.820037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.820048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 
00:27:05.579 [2024-11-28 12:50:47.820196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.820207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.820288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.820298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.820409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.820440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.820551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.820588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.820833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.820863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 
00:27:05.579 [2024-11-28 12:50:47.821068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.821101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.821414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.821444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.821697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.821728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.821909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.821939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.822191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.822221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 
00:27:05.579 [2024-11-28 12:50:47.822368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.822398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.822580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.822591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.822680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.822691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.822752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.822763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.579 [2024-11-28 12:50:47.822833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.822844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 
00:27:05.579 [2024-11-28 12:50:47.822928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.579 [2024-11-28 12:50:47.822938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.579 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.823150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.823160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.823317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.823349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.823534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.823565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.823759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.823791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 
00:27:05.580 [2024-11-28 12:50:47.824060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.824091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.824273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.824305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.824532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.824543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.824704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.824714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.824858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.824868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 
00:27:05.580 [2024-11-28 12:50:47.825007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.825040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.825163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.825195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.825373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.825404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.825519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.825529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.825687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.825697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 
00:27:05.580 [2024-11-28 12:50:47.825843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.825853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.826001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.826034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.826158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.826189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.826317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.826347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.826460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.826490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 
00:27:05.580 [2024-11-28 12:50:47.826669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.826679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.826764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.826774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.826840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.826850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.826934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.826944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.827106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.827117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 
00:27:05.580 [2024-11-28 12:50:47.827270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.827281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.827487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.827518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.827656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.827688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.827878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.827914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.828061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.828093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 
00:27:05.580 [2024-11-28 12:50:47.828335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.828366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.828554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.828584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.828828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.828860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.829039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.829071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.829292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.829323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 
00:27:05.580 [2024-11-28 12:50:47.829488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.829499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.829735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.829745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.829873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.829918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.580 [2024-11-28 12:50:47.830127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.580 [2024-11-28 12:50:47.830158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.580 qpair failed and we were unable to recover it. 00:27:05.581 [2024-11-28 12:50:47.830355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.581 [2024-11-28 12:50:47.830386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.581 qpair failed and we were unable to recover it. 
00:27:05.583 [2024-11-28 12:50:47.853071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.853112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.853258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.853268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.853479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.853510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.853754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.853785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.854024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.854059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 
00:27:05.584 [2024-11-28 12:50:47.854256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.854288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.854409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.854440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.854693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.854724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.854944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.854994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.855234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.855266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 
00:27:05.584 [2024-11-28 12:50:47.855458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.855489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.855684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.855694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.855932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.855971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.856231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.856263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.856448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.856458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 
00:27:05.584 [2024-11-28 12:50:47.856606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.856637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.856763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.856795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.857062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.857093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.857302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.857312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.857539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.857572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 
00:27:05.584 [2024-11-28 12:50:47.857762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.857793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.858045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.858078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.858305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.858337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.858532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.858563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.858759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.858791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 
00:27:05.584 [2024-11-28 12:50:47.859083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.859121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.859369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.859401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.859535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.859566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.859868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.859879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.859978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.859990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 
00:27:05.584 [2024-11-28 12:50:47.860202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.860233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.860407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.860438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.860577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.860609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.860808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.860839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.584 [2024-11-28 12:50:47.861109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.861141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 
00:27:05.584 [2024-11-28 12:50:47.861434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.584 [2024-11-28 12:50:47.861465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.584 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.861611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.861642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.861926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.861967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.862106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.862137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.862325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.862357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 
00:27:05.585 [2024-11-28 12:50:47.862488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.862519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.862676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.862686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.862866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.862897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.863048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.863081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.863300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.863331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 
00:27:05.585 [2024-11-28 12:50:47.863517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.863527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.863725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.863736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.863890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.863900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.864030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.864042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.864137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.864149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 
00:27:05.585 [2024-11-28 12:50:47.864302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.864333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.864576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.864608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.864804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.864836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.865019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.865051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.865294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.865325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 
00:27:05.585 [2024-11-28 12:50:47.865570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.865601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.865791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.865823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.865969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.866001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.866180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.866212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.866335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.866366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 
00:27:05.585 [2024-11-28 12:50:47.866491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.866522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.866711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.866722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.866865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.866876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.867028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.867039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.867117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.867127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 
00:27:05.585 [2024-11-28 12:50:47.867204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.867216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.867358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.867369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.867540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.867571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.867793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.867824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.867944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.867987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 
00:27:05.585 [2024-11-28 12:50:47.868181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.868212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.868323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.868353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.868536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.868547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.868687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.585 [2024-11-28 12:50:47.868698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.585 qpair failed and we were unable to recover it. 00:27:05.585 [2024-11-28 12:50:47.868915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.586 [2024-11-28 12:50:47.868925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.586 qpair failed and we were unable to recover it. 
00:27:05.586 [2024-11-28 12:50:47.869101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.586 [2024-11-28 12:50:47.869111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:05.586 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats with only the timestamp changing, from 2024-11-28 12:50:47.869257 through 2024-11-28 12:50:47.890633 ...]
00:27:05.589 [2024-11-28 12:50:47.890750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.890781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.890966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.890998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.891131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.891163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.891359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.891389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.891534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.891565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 
00:27:05.589 [2024-11-28 12:50:47.891750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.891780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.891984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.892016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.892271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.892282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.892443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.892474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.892620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.892651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 
00:27:05.589 [2024-11-28 12:50:47.892780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.892811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.893013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.893046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.893292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.893324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.893503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.893514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.893580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.893591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 
00:27:05.589 [2024-11-28 12:50:47.893707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.893737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.893858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.893890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.894010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.894041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.894239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.894270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.894488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.894520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 
00:27:05.589 [2024-11-28 12:50:47.894629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.894661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.894794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.894804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.894960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.894971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.895130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.895140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.895216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.895226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 
00:27:05.589 [2024-11-28 12:50:47.895393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.895403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.895640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.895672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.895851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.895883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.896008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.896040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.896223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.896255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 
00:27:05.589 [2024-11-28 12:50:47.896541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.589 [2024-11-28 12:50:47.896576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.589 qpair failed and we were unable to recover it. 00:27:05.589 [2024-11-28 12:50:47.896773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.896783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.896926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.896937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.897029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.897040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.897274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.897285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 
00:27:05.590 [2024-11-28 12:50:47.897380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.897393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.897527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.897537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.897780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.897810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.897955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.897988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.898200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.898232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 
00:27:05.590 [2024-11-28 12:50:47.898479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.898510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.898726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.898757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.898990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.899022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.899228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.899259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.899493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.899523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 
00:27:05.590 [2024-11-28 12:50:47.899666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.899697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.899946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.899989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.900187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.900218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.900410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.900441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.900710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.900741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 
00:27:05.590 [2024-11-28 12:50:47.900969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.901002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.901133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.901164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.901374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.901395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.901528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.901539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.901701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.901734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 
00:27:05.590 [2024-11-28 12:50:47.901934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.901974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.902094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.902125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.902405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.902436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.902582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.902614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.902867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.902899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 
00:27:05.590 [2024-11-28 12:50:47.903122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.903153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.903348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.903380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.903503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.903514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.903665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.903705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.903886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.903916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 
00:27:05.590 [2024-11-28 12:50:47.904089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.904161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.904366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.904401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.904542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.904574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.904836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.590 [2024-11-28 12:50:47.904867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.590 qpair failed and we were unable to recover it. 00:27:05.590 [2024-11-28 12:50:47.905056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.905087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 
00:27:05.591 [2024-11-28 12:50:47.905294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.905326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.905505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.905536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.905790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.905821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.905957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.905990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.906209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.906239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 
00:27:05.591 [2024-11-28 12:50:47.906361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.906392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.906510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.906525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.906681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.906695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.906901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.906915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.906998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.907014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 
00:27:05.591 [2024-11-28 12:50:47.907182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.907215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.907412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.907442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.907569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.907600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.907734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.907765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.907853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.907868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 
00:27:05.591 [2024-11-28 12:50:47.908039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.908071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.908257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.908289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.908417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.908431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.908519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.908533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.908644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.908680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 
00:27:05.591 [2024-11-28 12:50:47.908819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.908850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.908973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.909006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.909186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.909218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.909398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.909430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.909622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.909653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 
00:27:05.591 [2024-11-28 12:50:47.909910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.909920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.910080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.910091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.910258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.910289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.910432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.910464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.910653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.910685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 
00:27:05.591 [2024-11-28 12:50:47.910927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.910964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.911167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.911199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.911439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.911451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.911537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.911571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.911781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.911812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 
00:27:05.591 [2024-11-28 12:50:47.912009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.912041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.912255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.912286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.591 qpair failed and we were unable to recover it. 00:27:05.591 [2024-11-28 12:50:47.912483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.591 [2024-11-28 12:50:47.912514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.912722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.912753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.912999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.913031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 
00:27:05.592 [2024-11-28 12:50:47.913323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.913355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.913532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.913563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.913828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.913860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.913999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.914032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.914155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.914186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 
00:27:05.592 [2024-11-28 12:50:47.914380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.914411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.914592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.914603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.914760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.914791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.914912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.914944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.915211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.915241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 
00:27:05.592 [2024-11-28 12:50:47.915422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.915433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.915573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.915603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.915807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.915839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.916019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.916051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.916228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.916259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 
00:27:05.592 [2024-11-28 12:50:47.916446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.916457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.916543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.916553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.916751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.916761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.916988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.916999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.917083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.917098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 
00:27:05.592 [2024-11-28 12:50:47.917228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.917238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.917297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.917308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.917401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.917412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.917565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.917597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.917726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.917757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 
00:27:05.592 [2024-11-28 12:50:47.917894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.917925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.918094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.918130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.918323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.918354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.918483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.918515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.918737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.918769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 
00:27:05.592 [2024-11-28 12:50:47.918910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.918941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.919137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.919168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.919391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.919423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.919549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.919560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 00:27:05.592 [2024-11-28 12:50:47.919717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.919761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.592 qpair failed and we were unable to recover it. 
00:27:05.592 [2024-11-28 12:50:47.919869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.592 [2024-11-28 12:50:47.919899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.920086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.920119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.920314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.920345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.920600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.920611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.920776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.920787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 
00:27:05.593 [2024-11-28 12:50:47.920916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.920927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.921138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.921171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.921415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.921446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.921642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.921673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.921893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.921903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 
00:27:05.593 [2024-11-28 12:50:47.922109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.922120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.922323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.922334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.922489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.922500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.922576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.922587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.922652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.922662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 
00:27:05.593 [2024-11-28 12:50:47.922831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.922842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.922958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.922991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.923169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.923201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.923397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.923428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.923694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.923705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 
00:27:05.593 [2024-11-28 12:50:47.923858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.923868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.924043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.924054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.924148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.924159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.924314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.924345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.924485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.924522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 
00:27:05.593 [2024-11-28 12:50:47.924712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.924743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.924941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.924980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.925109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.925140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.925261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.925292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 00:27:05.593 [2024-11-28 12:50:47.925431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.593 [2024-11-28 12:50:47.925462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.593 qpair failed and we were unable to recover it. 
00:27:05.597 [2024-11-28 12:50:47.949014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.597 [2024-11-28 12:50:47.949046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.597 qpair failed and we were unable to recover it. 00:27:05.597 [2024-11-28 12:50:47.949253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.597 [2024-11-28 12:50:47.949284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.597 qpair failed and we were unable to recover it. 00:27:05.597 [2024-11-28 12:50:47.949406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.597 [2024-11-28 12:50:47.949437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.597 qpair failed and we were unable to recover it. 00:27:05.597 [2024-11-28 12:50:47.949705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.597 [2024-11-28 12:50:47.949736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.597 qpair failed and we were unable to recover it. 00:27:05.597 [2024-11-28 12:50:47.949863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.597 [2024-11-28 12:50:47.949898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.597 qpair failed and we were unable to recover it. 
00:27:05.597 [2024-11-28 12:50:47.950022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.597 [2024-11-28 12:50:47.950055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.597 qpair failed and we were unable to recover it. 00:27:05.597 [2024-11-28 12:50:47.950334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.597 [2024-11-28 12:50:47.950365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.597 qpair failed and we were unable to recover it. 00:27:05.597 [2024-11-28 12:50:47.950609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.597 [2024-11-28 12:50:47.950619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.597 qpair failed and we were unable to recover it. 00:27:05.597 [2024-11-28 12:50:47.950709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.597 [2024-11-28 12:50:47.950720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.597 qpair failed and we were unable to recover it. 00:27:05.597 [2024-11-28 12:50:47.950853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.597 [2024-11-28 12:50:47.950864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.597 qpair failed and we were unable to recover it. 
00:27:05.597 [2024-11-28 12:50:47.950925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.597 [2024-11-28 12:50:47.950936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.597 qpair failed and we were unable to recover it. 00:27:05.597 [2024-11-28 12:50:47.951097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.597 [2024-11-28 12:50:47.951108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.597 qpair failed and we were unable to recover it. 00:27:05.597 [2024-11-28 12:50:47.951185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.597 [2024-11-28 12:50:47.951195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.597 qpair failed and we were unable to recover it. 00:27:05.597 [2024-11-28 12:50:47.951342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.597 [2024-11-28 12:50:47.951353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.597 qpair failed and we were unable to recover it. 00:27:05.597 [2024-11-28 12:50:47.951599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.597 [2024-11-28 12:50:47.951630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.597 qpair failed and we were unable to recover it. 
00:27:05.597 [2024-11-28 12:50:47.951773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.597 [2024-11-28 12:50:47.951805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.597 qpair failed and we were unable to recover it. 00:27:05.597 [2024-11-28 12:50:47.952001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.597 [2024-11-28 12:50:47.952033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.597 qpair failed and we were unable to recover it. 00:27:05.597 [2024-11-28 12:50:47.952158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.597 [2024-11-28 12:50:47.952190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.597 qpair failed and we were unable to recover it. 00:27:05.597 [2024-11-28 12:50:47.952376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.597 [2024-11-28 12:50:47.952407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.597 qpair failed and we were unable to recover it. 00:27:05.597 [2024-11-28 12:50:47.952603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.597 [2024-11-28 12:50:47.952635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.597 qpair failed and we were unable to recover it. 
00:27:05.597 [2024-11-28 12:50:47.952833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.598 [2024-11-28 12:50:47.952863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.598 qpair failed and we were unable to recover it. 00:27:05.598 [2024-11-28 12:50:47.953058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.598 [2024-11-28 12:50:47.953090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.598 qpair failed and we were unable to recover it. 00:27:05.598 [2024-11-28 12:50:47.953270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.598 [2024-11-28 12:50:47.953302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.598 qpair failed and we were unable to recover it. 00:27:05.598 [2024-11-28 12:50:47.953487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.598 [2024-11-28 12:50:47.953518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.598 qpair failed and we were unable to recover it. 00:27:05.598 [2024-11-28 12:50:47.953652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.598 [2024-11-28 12:50:47.953662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.598 qpair failed and we were unable to recover it. 
00:27:05.598 [2024-11-28 12:50:47.953922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.598 [2024-11-28 12:50:47.953964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.598 qpair failed and we were unable to recover it. 00:27:05.598 [2024-11-28 12:50:47.954177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.598 [2024-11-28 12:50:47.954208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.598 qpair failed and we were unable to recover it. 00:27:05.598 [2024-11-28 12:50:47.954387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.598 [2024-11-28 12:50:47.954419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.598 qpair failed and we were unable to recover it. 00:27:05.598 [2024-11-28 12:50:47.954535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.598 [2024-11-28 12:50:47.954566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.598 qpair failed and we were unable to recover it. 00:27:05.598 [2024-11-28 12:50:47.954673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.598 [2024-11-28 12:50:47.954684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.598 qpair failed and we were unable to recover it. 
00:27:05.598 [2024-11-28 12:50:47.954818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.598 [2024-11-28 12:50:47.954828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.598 qpair failed and we were unable to recover it. 00:27:05.598 [2024-11-28 12:50:47.954987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.598 [2024-11-28 12:50:47.954998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.598 qpair failed and we were unable to recover it. 00:27:05.598 [2024-11-28 12:50:47.955149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.598 [2024-11-28 12:50:47.955181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.598 qpair failed and we were unable to recover it. 00:27:05.598 [2024-11-28 12:50:47.955315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.598 [2024-11-28 12:50:47.955345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.598 qpair failed and we were unable to recover it. 00:27:05.598 [2024-11-28 12:50:47.955634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.598 [2024-11-28 12:50:47.955663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.598 qpair failed and we were unable to recover it. 
00:27:05.598 [2024-11-28 12:50:47.955855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.598 [2024-11-28 12:50:47.955886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.598 qpair failed and we were unable to recover it. 00:27:05.598 [2024-11-28 12:50:47.956079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.598 [2024-11-28 12:50:47.956111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.598 qpair failed and we were unable to recover it. 00:27:05.598 [2024-11-28 12:50:47.956322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.598 [2024-11-28 12:50:47.956352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.598 qpair failed and we were unable to recover it. 00:27:05.598 [2024-11-28 12:50:47.956600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.598 [2024-11-28 12:50:47.956633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.598 qpair failed and we were unable to recover it. 00:27:05.598 [2024-11-28 12:50:47.956810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.598 [2024-11-28 12:50:47.956841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.598 qpair failed and we were unable to recover it. 
00:27:05.598 [2024-11-28 12:50:47.957102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.598 [2024-11-28 12:50:47.957135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-11-28 12:50:47.957257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.957289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-11-28 12:50:47.957488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.957520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-11-28 12:50:47.957649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.957680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-11-28 12:50:47.957853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.957867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 
00:27:05.599 [2024-11-28 12:50:47.957957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.957969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-11-28 12:50:47.958190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.958200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-11-28 12:50:47.958289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.958299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-11-28 12:50:47.958472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.958482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-11-28 12:50:47.958579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.958589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 
00:27:05.599 [2024-11-28 12:50:47.958762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.958802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-11-28 12:50:47.958940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.958987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-11-28 12:50:47.959117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.959148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-11-28 12:50:47.959277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.959307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-11-28 12:50:47.959442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.959473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 
00:27:05.599 [2024-11-28 12:50:47.959689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.959719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-11-28 12:50:47.959911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.959942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-11-28 12:50:47.960078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.960109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-11-28 12:50:47.960295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.960326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-11-28 12:50:47.960573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.960604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 
00:27:05.599 [2024-11-28 12:50:47.960798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.960828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-11-28 12:50:47.961072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.961104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-11-28 12:50:47.961283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.961314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-11-28 12:50:47.961558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.961589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 00:27:05.599 [2024-11-28 12:50:47.961802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.599 [2024-11-28 12:50:47.961812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.599 qpair failed and we were unable to recover it. 
00:27:05.599 [2024-11-28 12:50:47.961968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-11-28 12:50:47.961999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-11-28 12:50:47.962196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-11-28 12:50:47.962227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-11-28 12:50:47.962370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-11-28 12:50:47.962380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-11-28 12:50:47.962483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-11-28 12:50:47.962494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-11-28 12:50:47.962655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-11-28 12:50:47.962665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 
00:27:05.600 [2024-11-28 12:50:47.962818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-11-28 12:50:47.962849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-11-28 12:50:47.963030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-11-28 12:50:47.963062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-11-28 12:50:47.963204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-11-28 12:50:47.963235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-11-28 12:50:47.963414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-11-28 12:50:47.963424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 00:27:05.600 [2024-11-28 12:50:47.963610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.600 [2024-11-28 12:50:47.963641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.600 qpair failed and we were unable to recover it. 
00:27:05.600 [2024-11-28 12:50:47.963832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.600 [2024-11-28 12:50:47.963862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:05.600 qpair failed and we were unable to recover it.
00:27:05.600 [... the three-line record above (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats continuously from 12:50:47.963 through 12:50:47.987, always against addr=10.0.0.2, port=4420; only the timestamps change, and from 12:50:47.971340 onward the failing qpair is tqpair=0x7f8c58000b90 instead of 0x7f8c5c000b90 ...]
00:27:05.604 [2024-11-28 12:50:47.987739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.987770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 00:27:05.604 [2024-11-28 12:50:47.987894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.987909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 00:27:05.604 [2024-11-28 12:50:47.988094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.988109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 00:27:05.604 [2024-11-28 12:50:47.988277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.988292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 00:27:05.604 [2024-11-28 12:50:47.988465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.988496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 
00:27:05.604 [2024-11-28 12:50:47.988741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.988773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 00:27:05.604 [2024-11-28 12:50:47.988903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.988932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 00:27:05.604 [2024-11-28 12:50:47.989053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.989067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 00:27:05.604 [2024-11-28 12:50:47.989221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.989235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 00:27:05.604 [2024-11-28 12:50:47.989339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.989352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 
00:27:05.604 [2024-11-28 12:50:47.989492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.989506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 00:27:05.604 [2024-11-28 12:50:47.989603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.989616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 00:27:05.604 [2024-11-28 12:50:47.989767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.989795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 00:27:05.604 [2024-11-28 12:50:47.989992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.990025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 00:27:05.604 [2024-11-28 12:50:47.990227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.990265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 
00:27:05.604 [2024-11-28 12:50:47.990392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.990422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 00:27:05.604 [2024-11-28 12:50:47.990639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.990671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 00:27:05.604 [2024-11-28 12:50:47.990794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.990824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 00:27:05.604 [2024-11-28 12:50:47.991044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.991077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 00:27:05.604 [2024-11-28 12:50:47.991274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.991305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 
00:27:05.604 [2024-11-28 12:50:47.991552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.991582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 00:27:05.604 [2024-11-28 12:50:47.991771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.991814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 00:27:05.604 [2024-11-28 12:50:47.991966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.991981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 00:27:05.604 [2024-11-28 12:50:47.992150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.992165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 00:27:05.604 [2024-11-28 12:50:47.992312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.604 [2024-11-28 12:50:47.992343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.604 qpair failed and we were unable to recover it. 
00:27:05.605 [2024-11-28 12:50:47.992536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-11-28 12:50:47.992567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-11-28 12:50:47.992791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-11-28 12:50:47.992823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-11-28 12:50:47.993007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-11-28 12:50:47.993022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-11-28 12:50:47.993116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-11-28 12:50:47.993132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-11-28 12:50:47.993218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-11-28 12:50:47.993231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 
00:27:05.605 [2024-11-28 12:50:47.993414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-11-28 12:50:47.993447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-11-28 12:50:47.993642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-11-28 12:50:47.993673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-11-28 12:50:47.993917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-11-28 12:50:47.993960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-11-28 12:50:47.994090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-11-28 12:50:47.994122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-11-28 12:50:47.994364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-11-28 12:50:47.994395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 
00:27:05.605 [2024-11-28 12:50:47.994540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-11-28 12:50:47.994554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-11-28 12:50:47.994714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-11-28 12:50:47.994746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-11-28 12:50:47.994990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-11-28 12:50:47.995022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-11-28 12:50:47.995231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-11-28 12:50:47.995263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 00:27:05.605 [2024-11-28 12:50:47.995526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.605 [2024-11-28 12:50:47.995557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:05.605 qpair failed and we were unable to recover it. 
00:27:05.605 [2024-11-28 12:50:47.996078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.605 [2024-11-28 12:50:47.996107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:05.605 qpair failed and we were unable to recover it.
00:27:05.606 [2024-11-28 12:50:48.004304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.606 [2024-11-28 12:50:48.004375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.606 qpair failed and we were unable to recover it.
00:27:05.606 [2024-11-28 12:50:48.007596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-11-28 12:50:48.007628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-11-28 12:50:48.007873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-11-28 12:50:48.007904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-11-28 12:50:48.008105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-11-28 12:50:48.008137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-11-28 12:50:48.008326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-11-28 12:50:48.008357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-11-28 12:50:48.008626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-11-28 12:50:48.008658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 
00:27:05.606 [2024-11-28 12:50:48.008845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-11-28 12:50:48.008859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-11-28 12:50:48.009013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-11-28 12:50:48.009027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-11-28 12:50:48.009190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-11-28 12:50:48.009204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-11-28 12:50:48.009429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-11-28 12:50:48.009443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-11-28 12:50:48.009612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-11-28 12:50:48.009626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 
00:27:05.606 [2024-11-28 12:50:48.009732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-11-28 12:50:48.009762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-11-28 12:50:48.010008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.606 [2024-11-28 12:50:48.010040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.606 qpair failed and we were unable to recover it. 00:27:05.606 [2024-11-28 12:50:48.010218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.010249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.010448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.010479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.010658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.010690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 
00:27:05.607 [2024-11-28 12:50:48.010832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.010862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.010998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.011031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.011238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.011269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.011514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.011545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.011735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.011766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 
00:27:05.607 [2024-11-28 12:50:48.012030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.012062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.012310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.012341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.012527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.012558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.012747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.012778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.013046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.013078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 
00:27:05.607 [2024-11-28 12:50:48.013293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.013325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.013567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.013631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.013810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.013827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.013932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.013984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.014206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.014237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 
00:27:05.607 [2024-11-28 12:50:48.014455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.014486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.014691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.014705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.014933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.014955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.015192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.015206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.015304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.015318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 
00:27:05.607 [2024-11-28 12:50:48.015461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.015491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.015617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.015648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.015764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.015794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.015911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.015942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.016070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.016085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 
00:27:05.607 [2024-11-28 12:50:48.016220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.016235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.016387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.016402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.016571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.016602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.016847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.016876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.017081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.017113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 
00:27:05.607 [2024-11-28 12:50:48.017225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.017256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.017477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.017509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.017723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.017754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.017956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.017990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.018128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.018159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 
00:27:05.607 [2024-11-28 12:50:48.018284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.018314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.018504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.018534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.018777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.018807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.019017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.019055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.019236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.019267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 
00:27:05.607 [2024-11-28 12:50:48.019446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.607 [2024-11-28 12:50:48.019478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.607 qpair failed and we were unable to recover it. 00:27:05.607 [2024-11-28 12:50:48.019590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.019604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 00:27:05.608 [2024-11-28 12:50:48.019702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.019716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 00:27:05.608 [2024-11-28 12:50:48.019851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.019883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 00:27:05.608 [2024-11-28 12:50:48.020129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.020162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 
00:27:05.608 [2024-11-28 12:50:48.020292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.020324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 00:27:05.608 [2024-11-28 12:50:48.020444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.020476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 00:27:05.608 [2024-11-28 12:50:48.020689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.020720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 00:27:05.608 [2024-11-28 12:50:48.020867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.020898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 00:27:05.608 [2024-11-28 12:50:48.021102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.021117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 
00:27:05.608 [2024-11-28 12:50:48.021273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.021302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 00:27:05.608 [2024-11-28 12:50:48.021473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.021504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 00:27:05.608 [2024-11-28 12:50:48.021797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.021829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 00:27:05.608 [2024-11-28 12:50:48.022036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.022068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 00:27:05.608 [2024-11-28 12:50:48.022340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.022371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 
00:27:05.608 [2024-11-28 12:50:48.022558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.022589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 00:27:05.608 [2024-11-28 12:50:48.022799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.022830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 00:27:05.608 [2024-11-28 12:50:48.023018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.023032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 00:27:05.608 [2024-11-28 12:50:48.023177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.023192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 00:27:05.608 [2024-11-28 12:50:48.023342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.023356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 
00:27:05.608 [2024-11-28 12:50:48.023439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.023453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 00:27:05.608 [2024-11-28 12:50:48.023616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.023631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 00:27:05.608 [2024-11-28 12:50:48.023790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.023804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 00:27:05.608 [2024-11-28 12:50:48.023893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.023907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 00:27:05.608 [2024-11-28 12:50:48.024047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.608 [2024-11-28 12:50:48.024063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.608 qpair failed and we were unable to recover it. 
00:27:05.611 [2024-11-28 12:50:48.046517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.046548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.046673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.046703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.046825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.046856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.047095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.047110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.047249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.047279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 
00:27:05.611 [2024-11-28 12:50:48.047400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.047432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.047633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.047665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.047877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.047891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.048072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.048087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.048310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.048341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 
00:27:05.611 [2024-11-28 12:50:48.048529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.048559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.048776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.048807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.048977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.048992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.049275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.049311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.049425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.049442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 
00:27:05.611 [2024-11-28 12:50:48.049600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.049633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.049814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.049845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.050071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.050103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.050234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.050265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.050525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.050556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 
00:27:05.611 [2024-11-28 12:50:48.050666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.050697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.050942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.050985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.051228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.051259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.051398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.051428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.051561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.051591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 
00:27:05.611 [2024-11-28 12:50:48.051812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.051843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.052107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.052155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.052400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.052431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.052679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.052693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.052771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.052785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 
00:27:05.611 [2024-11-28 12:50:48.052942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.052961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.053174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.053189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.053329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.053344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.053431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.053459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.053647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.053677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 
00:27:05.611 [2024-11-28 12:50:48.053930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.053972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.054211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.054225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.054410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.054424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.054584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.054598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 00:27:05.611 [2024-11-28 12:50:48.054806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.611 [2024-11-28 12:50:48.054820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.611 qpair failed and we were unable to recover it. 
00:27:05.611 [2024-11-28 12:50:48.055061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.612 [2024-11-28 12:50:48.055076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.612 qpair failed and we were unable to recover it. 00:27:05.612 [2024-11-28 12:50:48.055171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.612 [2024-11-28 12:50:48.055185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.612 qpair failed and we were unable to recover it. 00:27:05.612 [2024-11-28 12:50:48.055284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.612 [2024-11-28 12:50:48.055299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.612 qpair failed and we were unable to recover it. 00:27:05.612 [2024-11-28 12:50:48.055390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.612 [2024-11-28 12:50:48.055405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.612 qpair failed and we were unable to recover it. 00:27:05.612 [2024-11-28 12:50:48.055570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.612 [2024-11-28 12:50:48.055584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.612 qpair failed and we were unable to recover it. 
00:27:05.612 [2024-11-28 12:50:48.055737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.612 [2024-11-28 12:50:48.055751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.612 qpair failed and we were unable to recover it. 00:27:05.612 [2024-11-28 12:50:48.055833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.612 [2024-11-28 12:50:48.055868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.612 qpair failed and we were unable to recover it. 00:27:05.612 [2024-11-28 12:50:48.056048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.612 [2024-11-28 12:50:48.056079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.612 qpair failed and we were unable to recover it. 00:27:05.612 [2024-11-28 12:50:48.056292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.612 [2024-11-28 12:50:48.056323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.612 qpair failed and we were unable to recover it. 00:27:05.882 [2024-11-28 12:50:48.056452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.882 [2024-11-28 12:50:48.056484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.882 qpair failed and we were unable to recover it. 
00:27:05.882 [2024-11-28 12:50:48.056621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.882 [2024-11-28 12:50:48.056651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.882 qpair failed and we were unable to recover it. 00:27:05.882 [2024-11-28 12:50:48.056789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.882 [2024-11-28 12:50:48.056804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.882 qpair failed and we were unable to recover it. 00:27:05.882 [2024-11-28 12:50:48.057010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.882 [2024-11-28 12:50:48.057025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.882 qpair failed and we were unable to recover it. 00:27:05.882 [2024-11-28 12:50:48.057196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.882 [2024-11-28 12:50:48.057210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.882 qpair failed and we were unable to recover it. 00:27:05.882 [2024-11-28 12:50:48.057299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.882 [2024-11-28 12:50:48.057314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.882 qpair failed and we were unable to recover it. 
00:27:05.882 [2024-11-28 12:50:48.057413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.882 [2024-11-28 12:50:48.057428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.882 qpair failed and we were unable to recover it. 00:27:05.882 [2024-11-28 12:50:48.057579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.882 [2024-11-28 12:50:48.057593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.882 qpair failed and we were unable to recover it. 00:27:05.882 [2024-11-28 12:50:48.057736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.882 [2024-11-28 12:50:48.057750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.882 qpair failed and we were unable to recover it. 00:27:05.882 [2024-11-28 12:50:48.057829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.882 [2024-11-28 12:50:48.057843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.882 qpair failed and we were unable to recover it. 00:27:05.882 [2024-11-28 12:50:48.058079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.882 [2024-11-28 12:50:48.058093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.882 qpair failed and we were unable to recover it. 
00:27:05.882 [2024-11-28 12:50:48.058251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.882 [2024-11-28 12:50:48.058266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.882 qpair failed and we were unable to recover it. 00:27:05.882 [2024-11-28 12:50:48.058425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.882 [2024-11-28 12:50:48.058440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 00:27:05.883 [2024-11-28 12:50:48.058521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.883 [2024-11-28 12:50:48.058535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 00:27:05.883 [2024-11-28 12:50:48.058689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.883 [2024-11-28 12:50:48.058703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 00:27:05.883 [2024-11-28 12:50:48.058855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.883 [2024-11-28 12:50:48.058870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 
00:27:05.883 [2024-11-28 12:50:48.058955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.883 [2024-11-28 12:50:48.058970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 00:27:05.883 [2024-11-28 12:50:48.059207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.883 [2024-11-28 12:50:48.059224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 00:27:05.883 [2024-11-28 12:50:48.059385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.883 [2024-11-28 12:50:48.059399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 00:27:05.883 [2024-11-28 12:50:48.059557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.883 [2024-11-28 12:50:48.059571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 00:27:05.883 [2024-11-28 12:50:48.059643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.883 [2024-11-28 12:50:48.059656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 
00:27:05.883 [2024-11-28 12:50:48.059888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.883 [2024-11-28 12:50:48.059902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 00:27:05.883 [2024-11-28 12:50:48.060110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.883 [2024-11-28 12:50:48.060125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 00:27:05.883 [2024-11-28 12:50:48.060282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.883 [2024-11-28 12:50:48.060297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 00:27:05.883 [2024-11-28 12:50:48.060398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.883 [2024-11-28 12:50:48.060412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 00:27:05.883 [2024-11-28 12:50:48.060620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.883 [2024-11-28 12:50:48.060634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 
00:27:05.883 [2024-11-28 12:50:48.060810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.883 [2024-11-28 12:50:48.060824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 00:27:05.883 [2024-11-28 12:50:48.060898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.883 [2024-11-28 12:50:48.060912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 00:27:05.883 [2024-11-28 12:50:48.061097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.883 [2024-11-28 12:50:48.061111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 00:27:05.883 [2024-11-28 12:50:48.061202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.883 [2024-11-28 12:50:48.061216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 00:27:05.883 [2024-11-28 12:50:48.061314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.883 [2024-11-28 12:50:48.061328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 
00:27:05.883 [2024-11-28 12:50:48.061495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.883 [2024-11-28 12:50:48.061510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 00:27:05.883 [2024-11-28 12:50:48.061672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.883 [2024-11-28 12:50:48.061703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 00:27:05.883 [2024-11-28 12:50:48.061906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.883 [2024-11-28 12:50:48.061937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 00:27:05.883 [2024-11-28 12:50:48.062089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.883 [2024-11-28 12:50:48.062121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 00:27:05.883 [2024-11-28 12:50:48.062366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.883 [2024-11-28 12:50:48.062397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.883 qpair failed and we were unable to recover it. 
00:27:05.883 [2024-11-28 12:50:48.062642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.883 [2024-11-28 12:50:48.062673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.883 qpair failed and we were unable to recover it.
00:27:05.883 [2024-11-28 12:50:48.062888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.883 [2024-11-28 12:50:48.062918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.883 qpair failed and we were unable to recover it.
00:27:05.883 [2024-11-28 12:50:48.063235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.883 [2024-11-28 12:50:48.063305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:05.883 qpair failed and we were unable to recover it.
00:27:05.883 [2024-11-28 12:50:48.063521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.883 [2024-11-28 12:50:48.063556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:05.883 qpair failed and we were unable to recover it.
00:27:05.883 [2024-11-28 12:50:48.063831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.883 [2024-11-28 12:50:48.063863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:05.883 qpair failed and we were unable to recover it.
00:27:05.886 [2024-11-28 12:50:48.084528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.084559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.886 qpair failed and we were unable to recover it. 00:27:05.886 [2024-11-28 12:50:48.084837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.084868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.886 qpair failed and we were unable to recover it. 00:27:05.886 [2024-11-28 12:50:48.085042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.085057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.886 qpair failed and we were unable to recover it. 00:27:05.886 [2024-11-28 12:50:48.085154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.085168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.886 qpair failed and we were unable to recover it. 00:27:05.886 [2024-11-28 12:50:48.085251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.085266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.886 qpair failed and we were unable to recover it. 
00:27:05.886 [2024-11-28 12:50:48.085499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.085530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.886 qpair failed and we were unable to recover it. 00:27:05.886 [2024-11-28 12:50:48.085640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.085671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.886 qpair failed and we were unable to recover it. 00:27:05.886 [2024-11-28 12:50:48.085851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.085881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.886 qpair failed and we were unable to recover it. 00:27:05.886 [2024-11-28 12:50:48.086067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.086082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.886 qpair failed and we were unable to recover it. 00:27:05.886 [2024-11-28 12:50:48.086238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.086268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.886 qpair failed and we were unable to recover it. 
00:27:05.886 [2024-11-28 12:50:48.086516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.086547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.886 qpair failed and we were unable to recover it. 00:27:05.886 [2024-11-28 12:50:48.086728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.086767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.886 qpair failed and we were unable to recover it. 00:27:05.886 [2024-11-28 12:50:48.086919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.086933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.886 qpair failed and we were unable to recover it. 00:27:05.886 [2024-11-28 12:50:48.087040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.087074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.886 qpair failed and we were unable to recover it. 00:27:05.886 [2024-11-28 12:50:48.087193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.087225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.886 qpair failed and we were unable to recover it. 
00:27:05.886 [2024-11-28 12:50:48.087407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.087438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.886 qpair failed and we were unable to recover it. 00:27:05.886 [2024-11-28 12:50:48.087651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.087682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.886 qpair failed and we were unable to recover it. 00:27:05.886 [2024-11-28 12:50:48.087878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.087909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.886 qpair failed and we were unable to recover it. 00:27:05.886 [2024-11-28 12:50:48.088152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.088166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.886 qpair failed and we were unable to recover it. 00:27:05.886 [2024-11-28 12:50:48.088333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.088348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.886 qpair failed and we were unable to recover it. 
00:27:05.886 [2024-11-28 12:50:48.088447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.088477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.886 qpair failed and we were unable to recover it. 00:27:05.886 [2024-11-28 12:50:48.088616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.088647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.886 qpair failed and we were unable to recover it. 00:27:05.886 [2024-11-28 12:50:48.088888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.088919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.886 qpair failed and we were unable to recover it. 00:27:05.886 [2024-11-28 12:50:48.089083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.886 [2024-11-28 12:50:48.089115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.089260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.089273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 
00:27:05.887 [2024-11-28 12:50:48.089414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.089446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.089646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.089678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.089798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.089830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.090046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.090057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.090257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.090267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 
00:27:05.887 [2024-11-28 12:50:48.090435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.090446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.090506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.090517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.090667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.090678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.090826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.090858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.091129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.091162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 
00:27:05.887 [2024-11-28 12:50:48.091372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.091405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.091527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.091558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.091755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.091787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.091912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.091943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.092085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.092096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 
00:27:05.887 [2024-11-28 12:50:48.092251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.092262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.092485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.092495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.092560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.092571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.092643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.092654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.092745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.092756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 
00:27:05.887 [2024-11-28 12:50:48.092844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.092855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.092956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.092967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.093176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.093209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.093400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.093430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.093614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.093646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 
00:27:05.887 [2024-11-28 12:50:48.093772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.093804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.093993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.094025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.094148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.094179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.094305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.094336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.094582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.094614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 
00:27:05.887 [2024-11-28 12:50:48.094812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.887 [2024-11-28 12:50:48.094844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.887 qpair failed and we were unable to recover it. 00:27:05.887 [2024-11-28 12:50:48.095111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.888 [2024-11-28 12:50:48.095143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.888 qpair failed and we were unable to recover it. 00:27:05.888 [2024-11-28 12:50:48.095245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.888 [2024-11-28 12:50:48.095256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.888 qpair failed and we were unable to recover it. 00:27:05.888 [2024-11-28 12:50:48.095408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.888 [2024-11-28 12:50:48.095419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.888 qpair failed and we were unable to recover it. 00:27:05.888 [2024-11-28 12:50:48.095589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.888 [2024-11-28 12:50:48.095620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.888 qpair failed and we were unable to recover it. 
00:27:05.888 [2024-11-28 12:50:48.095751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.888 [2024-11-28 12:50:48.095782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.888 qpair failed and we were unable to recover it. 00:27:05.888 [2024-11-28 12:50:48.096051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.888 [2024-11-28 12:50:48.096062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.888 qpair failed and we were unable to recover it. 00:27:05.888 [2024-11-28 12:50:48.096193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.888 [2024-11-28 12:50:48.096204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.888 qpair failed and we were unable to recover it. 00:27:05.888 [2024-11-28 12:50:48.096387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.888 [2024-11-28 12:50:48.096423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.888 qpair failed and we were unable to recover it. 00:27:05.888 [2024-11-28 12:50:48.096539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.888 [2024-11-28 12:50:48.096571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.888 qpair failed and we were unable to recover it. 
00:27:05.888 [2024-11-28 12:50:48.096853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.888 [2024-11-28 12:50:48.096883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.888 qpair failed and we were unable to recover it. 00:27:05.888 [2024-11-28 12:50:48.096987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.888 [2024-11-28 12:50:48.096997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.888 qpair failed and we were unable to recover it. 00:27:05.888 [2024-11-28 12:50:48.097200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.888 [2024-11-28 12:50:48.097210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.888 qpair failed and we were unable to recover it. 00:27:05.888 [2024-11-28 12:50:48.097358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.888 [2024-11-28 12:50:48.097368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.888 qpair failed and we were unable to recover it. 00:27:05.888 [2024-11-28 12:50:48.097528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.888 [2024-11-28 12:50:48.097539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.888 qpair failed and we were unable to recover it. 
00:27:05.888 [2024-11-28 12:50:48.097670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.888 [2024-11-28 12:50:48.097681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.888 qpair failed and we were unable to recover it. 00:27:05.888 [2024-11-28 12:50:48.097842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.888 [2024-11-28 12:50:48.097872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.888 qpair failed and we were unable to recover it. 00:27:05.888 [2024-11-28 12:50:48.098119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.888 [2024-11-28 12:50:48.098152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.888 qpair failed and we were unable to recover it. 00:27:05.888 [2024-11-28 12:50:48.098274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.888 [2024-11-28 12:50:48.098306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.888 qpair failed and we were unable to recover it. 00:27:05.888 [2024-11-28 12:50:48.098549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.888 [2024-11-28 12:50:48.098581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.888 qpair failed and we were unable to recover it. 
00:27:05.888 [2024-11-28 12:50:48.098799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.888 [2024-11-28 12:50:48.098831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420
00:27:05.888 qpair failed and we were unable to recover it.
00:27:05.888 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats through 12:50:48.114621, for tqpair values 0x7f8c5c000b90, 0xd02be0, 0x7f8c58000b90, and 0x7f8c64000b90, interleaved with the shell trace below ...]
00:27:05.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2681599 Killed "${NVMF_APP[@]}" "$@"
00:27:05.888 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:27:05.889 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:05.889 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:05.889 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:05.889 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:05.890 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2682327
00:27:05.890 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2682327
00:27:05.890 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:05.890 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2682327 ']'
00:27:05.890 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:05.890 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:05.890 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:05.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:05.891 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:05.891 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:05.891 [2024-11-28 12:50:48.114757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-11-28 12:50:48.114769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-11-28 12:50:48.114977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-11-28 12:50:48.114988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-11-28 12:50:48.115067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-11-28 12:50:48.115078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-11-28 12:50:48.115178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-11-28 12:50:48.115189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-11-28 12:50:48.115279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-11-28 12:50:48.115290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 
00:27:05.891 [2024-11-28 12:50:48.115442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-11-28 12:50:48.115453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-11-28 12:50:48.115605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-11-28 12:50:48.115617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-11-28 12:50:48.115763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-11-28 12:50:48.115774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-11-28 12:50:48.115858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-11-28 12:50:48.115868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-11-28 12:50:48.116075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-11-28 12:50:48.116086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 
00:27:05.891 [2024-11-28 12:50:48.116165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-11-28 12:50:48.116176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-11-28 12:50:48.116313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-11-28 12:50:48.116324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.891 qpair failed and we were unable to recover it. 00:27:05.891 [2024-11-28 12:50:48.116392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.891 [2024-11-28 12:50:48.116403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.116541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.116552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.116643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.116654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 
00:27:05.892 [2024-11-28 12:50:48.116743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.116753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.116823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.116834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.116913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.116924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.117015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.117026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.117179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.117189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 
00:27:05.892 [2024-11-28 12:50:48.117281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.117292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.117377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.117387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.117519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.117529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.117688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.117699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.117839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.117849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 
00:27:05.892 [2024-11-28 12:50:48.117914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.117924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.118075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.118086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.118234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.118245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.118314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.118324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.118473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.118483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 
00:27:05.892 [2024-11-28 12:50:48.118643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.118654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.118867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.118878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.119067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.119078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.119231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.119242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.119331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.119342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 
00:27:05.892 [2024-11-28 12:50:48.119494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.119504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.119585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.119596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.119880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.119891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.120024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.120034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.120129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.120140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 
00:27:05.892 [2024-11-28 12:50:48.120296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.120307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.120445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.120456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.120547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.120557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.120627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.120637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.120721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.120731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 
00:27:05.892 [2024-11-28 12:50:48.120867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.120878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.120958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.120970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.121116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.121127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.121199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.892 [2024-11-28 12:50:48.121210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.892 qpair failed and we were unable to recover it. 00:27:05.892 [2024-11-28 12:50:48.121307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.121318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 
00:27:05.893 [2024-11-28 12:50:48.121450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.121461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.121618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.121629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.121720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.121731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.121818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.121829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.121928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.121938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 
00:27:05.893 [2024-11-28 12:50:48.122042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.122053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.122215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.122226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.122306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.122316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.122525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.122536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.122697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.122708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 
00:27:05.893 [2024-11-28 12:50:48.122935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.122945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.123024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.123034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.123118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.123128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.123223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.123234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.123381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.123393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 
00:27:05.893 [2024-11-28 12:50:48.123471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.123482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.123620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.123631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.123711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.123721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.123853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.123864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.124000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.124011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 
00:27:05.893 [2024-11-28 12:50:48.124209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.124219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.124303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.124314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.124377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.124387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.124479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.124489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.124573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.124584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 
00:27:05.893 [2024-11-28 12:50:48.124654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.124664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.124729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.124740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.124902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.124912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.125006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.125017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 00:27:05.893 [2024-11-28 12:50:48.125109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.893 [2024-11-28 12:50:48.125119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.893 qpair failed and we were unable to recover it. 
00:27:05.896 [2024-11-28 12:50:48.138528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.896 [2024-11-28 12:50:48.138539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.896 qpair failed and we were unable to recover it. 00:27:05.896 [2024-11-28 12:50:48.138687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.896 [2024-11-28 12:50:48.138698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.896 qpair failed and we were unable to recover it. 00:27:05.896 [2024-11-28 12:50:48.138860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.896 [2024-11-28 12:50:48.138870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.896 qpair failed and we were unable to recover it. 00:27:05.896 [2024-11-28 12:50:48.139039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.896 [2024-11-28 12:50:48.139051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.896 qpair failed and we were unable to recover it. 00:27:05.896 [2024-11-28 12:50:48.139185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.896 [2024-11-28 12:50:48.139196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.896 qpair failed and we were unable to recover it. 
00:27:05.896 [2024-11-28 12:50:48.139354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.896 [2024-11-28 12:50:48.139365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.896 qpair failed and we were unable to recover it. 00:27:05.896 [2024-11-28 12:50:48.139517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.896 [2024-11-28 12:50:48.139528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.896 qpair failed and we were unable to recover it. 00:27:05.896 [2024-11-28 12:50:48.139655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.139667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.139814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.139824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.139969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.139980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 
00:27:05.897 [2024-11-28 12:50:48.140125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.140136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.140236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.140247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.140340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.140351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.140494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.140505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.140721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.140732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 
00:27:05.897 [2024-11-28 12:50:48.140872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.140882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.141037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.141050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.141201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.141212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.141307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.141317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.141467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.141478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 
00:27:05.897 [2024-11-28 12:50:48.141571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.141582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.141648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.141669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.141812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.141822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.141895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.141905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.142052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.142063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 
00:27:05.897 [2024-11-28 12:50:48.142198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.142208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.142357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.142368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.142500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.142511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.142600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.142610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.142682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.142692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 
00:27:05.897 [2024-11-28 12:50:48.142805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.142816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.142891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.142901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.143031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.143042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.143207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.143218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.143368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.143378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 
00:27:05.897 [2024-11-28 12:50:48.143449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.143460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.143549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.143559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.143633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.143644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.143700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.143711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 00:27:05.897 [2024-11-28 12:50:48.143774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.897 [2024-11-28 12:50:48.143784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.897 qpair failed and we were unable to recover it. 
00:27:05.897 [2024-11-28 12:50:48.144273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.897 [2024-11-28 12:50:48.144300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420
00:27:05.897 qpair failed and we were unable to recover it.
00:27:05.899 [2024-11-28 12:50:48.152663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.152673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-11-28 12:50:48.152809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.152822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-11-28 12:50:48.152911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.152922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-11-28 12:50:48.153075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.153087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-11-28 12:50:48.153168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.153179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 
00:27:05.899 [2024-11-28 12:50:48.153324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.153335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-11-28 12:50:48.153537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.153548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-11-28 12:50:48.153613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.153623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-11-28 12:50:48.153873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.153884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-11-28 12:50:48.153976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.153987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 
00:27:05.899 [2024-11-28 12:50:48.154158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.154169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-11-28 12:50:48.154257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.154268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-11-28 12:50:48.154330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.154345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-11-28 12:50:48.154413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.154423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-11-28 12:50:48.154565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.154575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 
00:27:05.899 [2024-11-28 12:50:48.154641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.154651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-11-28 12:50:48.154805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.154816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-11-28 12:50:48.154960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.154972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-11-28 12:50:48.155047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.155058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-11-28 12:50:48.155158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.155169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 
00:27:05.899 [2024-11-28 12:50:48.155255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.155266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-11-28 12:50:48.155339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.155350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-11-28 12:50:48.155488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.155498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-11-28 12:50:48.155581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.155592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-11-28 12:50:48.155663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.155674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 
00:27:05.899 [2024-11-28 12:50:48.155812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.155822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.899 qpair failed and we were unable to recover it. 00:27:05.899 [2024-11-28 12:50:48.155914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.899 [2024-11-28 12:50:48.155925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-11-28 12:50:48.156096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.156107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-11-28 12:50:48.156202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.156213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-11-28 12:50:48.156345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.156355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 
00:27:05.900 [2024-11-28 12:50:48.156488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.156498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-11-28 12:50:48.156643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.156654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-11-28 12:50:48.156797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.156808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-11-28 12:50:48.156937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.156953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-11-28 12:50:48.157021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.157032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 
00:27:05.900 [2024-11-28 12:50:48.157209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.157219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-11-28 12:50:48.157362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.157372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-11-28 12:50:48.157511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.157522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-11-28 12:50:48.157664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.157675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-11-28 12:50:48.157755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.157765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 
00:27:05.900 [2024-11-28 12:50:48.157856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.157867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-11-28 12:50:48.157967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.157981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-11-28 12:50:48.158055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.158066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-11-28 12:50:48.158201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.158211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-11-28 12:50:48.158380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.158391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 
00:27:05.900 [2024-11-28 12:50:48.158593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.158603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-11-28 12:50:48.158692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.158703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-11-28 12:50:48.158842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.158853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-11-28 12:50:48.159052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.159064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-11-28 12:50:48.159214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.159224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 
00:27:05.900 [2024-11-28 12:50:48.159310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.159321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-11-28 12:50:48.159456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.159466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-11-28 12:50:48.159619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.159630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-11-28 12:50:48.159705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.159715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 00:27:05.900 [2024-11-28 12:50:48.159785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.900 [2024-11-28 12:50:48.159795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.900 qpair failed and we were unable to recover it. 
00:27:05.900 [2024-11-28 12:50:48.160222] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization...
00:27:05.900 [2024-11-28 12:50:48.160262] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:05.901 [2024-11-28 12:50:48.164654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.901 [2024-11-28 12:50:48.164665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.901 qpair failed and we were unable to recover it. 00:27:05.901 [2024-11-28 12:50:48.164732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.901 [2024-11-28 12:50:48.164743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.901 qpair failed and we were unable to recover it. 00:27:05.901 [2024-11-28 12:50:48.164968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.901 [2024-11-28 12:50:48.164979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.901 qpair failed and we were unable to recover it. 00:27:05.901 [2024-11-28 12:50:48.165151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.901 [2024-11-28 12:50:48.165162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.901 qpair failed and we were unable to recover it. 00:27:05.901 [2024-11-28 12:50:48.165302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.901 [2024-11-28 12:50:48.165312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.901 qpair failed and we were unable to recover it. 
00:27:05.902 [2024-11-28 12:50:48.165455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.165466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.165547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.165557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.165634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.165644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.165847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.165859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.166012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.166024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 
00:27:05.902 [2024-11-28 12:50:48.166101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.166112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.166248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.166259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.166341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.166352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.166485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.166496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.166698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.166709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 
00:27:05.902 [2024-11-28 12:50:48.166865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.166876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.167032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.167043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.167188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.167199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.167342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.167354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.167416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.167426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 
00:27:05.902 [2024-11-28 12:50:48.167507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.167518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.167673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.167683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.167832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.167842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.167934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.167945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.168079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.168090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 
00:27:05.902 [2024-11-28 12:50:48.168173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.168184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.168351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.168362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.168517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.168530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.168687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.168698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.168830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.168841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 
00:27:05.902 [2024-11-28 12:50:48.168972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.168983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.169071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.169082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.169227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.169238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.169316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.169327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.169406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.169417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 
00:27:05.902 [2024-11-28 12:50:48.169549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.169559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.169623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.169633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.169712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.169723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.169798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.169808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.170033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.170044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 
00:27:05.902 [2024-11-28 12:50:48.170129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.170141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.170281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.170292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.902 [2024-11-28 12:50:48.170370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.902 [2024-11-28 12:50:48.170381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.902 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.170559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.170570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.170652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.170663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 
00:27:05.903 [2024-11-28 12:50:48.170757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.170768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.170902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.170913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.171077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.171088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.171152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.171163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.171300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.171311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 
00:27:05.903 [2024-11-28 12:50:48.171457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.171467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.171625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.171636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.171776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.171787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.171864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.171875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.171978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.171989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 
00:27:05.903 [2024-11-28 12:50:48.172138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.172149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.172230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.172240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.172314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.172325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.172469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.172480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.172546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.172556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 
00:27:05.903 [2024-11-28 12:50:48.172630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.172641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.172729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.172740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.172970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.172981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.173147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.173158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.173309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.173320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 
00:27:05.903 [2024-11-28 12:50:48.173475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.173486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.173563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.173574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.173715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.173728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.173806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.173816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.174053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.174065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 
00:27:05.903 [2024-11-28 12:50:48.174222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.174233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.174375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.174385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.174463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.174475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.174555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.174566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 00:27:05.903 [2024-11-28 12:50:48.174713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.174723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it. 
00:27:05.903 [2024-11-28 12:50:48.174818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.903 [2024-11-28 12:50:48.174829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.903 qpair failed and we were unable to recover it.
[log collapsed: the three-line failure sequence above — posix.c:1054:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it" — repeats roughly 115 more times between 12:50:48.175051 and 12:50:48.190899, all against addr=10.0.0.2, port=4420. Every repetition targets tqpair=0x7f8c5c000b90 until 12:50:48.188656, after which the failing tqpair is 0x7f8c64000b90.]
00:27:05.906 [2024-11-28 12:50:48.191116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.906 [2024-11-28 12:50:48.191131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.906 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.191294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.191309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.191394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.191408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.191498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.191512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.191625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.191639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 
00:27:05.907 [2024-11-28 12:50:48.191736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.191750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.191914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.191928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.192181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.192196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.192356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.192370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.192470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.192484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 
00:27:05.907 [2024-11-28 12:50:48.192580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.192595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.192869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.192883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.193049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.193063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.193244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.193258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.193513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.193526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 
00:27:05.907 [2024-11-28 12:50:48.193617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.193632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.193774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.193788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.193891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.193905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.194097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.194112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.194328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.194342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 
00:27:05.907 [2024-11-28 12:50:48.194425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.194439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.194542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.194556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.194713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.194727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.194941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.194961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.195223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.195246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 
00:27:05.907 [2024-11-28 12:50:48.195436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.195451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.195526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.195539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.195782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.195796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.195935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.195955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.196062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.196076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 
00:27:05.907 [2024-11-28 12:50:48.196184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.196198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.196356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.196370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.196526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.196540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.196711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.196725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.196906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.196920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 
00:27:05.907 [2024-11-28 12:50:48.197066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.197080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.197240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.197254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.197414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.197428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.197607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.907 [2024-11-28 12:50:48.197621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.907 qpair failed and we were unable to recover it. 00:27:05.907 [2024-11-28 12:50:48.197793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.197807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 
00:27:05.908 [2024-11-28 12:50:48.197993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.198008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.198216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.198230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.198428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.198442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.198551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.198565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.198803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.198817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 
00:27:05.908 [2024-11-28 12:50:48.198912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.198926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.199094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.199109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.199252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.199267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.199353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.199366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.199532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.199546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 
00:27:05.908 [2024-11-28 12:50:48.199638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.199652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.199797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.199815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.199911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.199925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.200021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.200036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.200132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.200146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 
00:27:05.908 [2024-11-28 12:50:48.200308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.200322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.200477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.200491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.200562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.200576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.200792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.200806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.200962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.200977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 
00:27:05.908 [2024-11-28 12:50:48.201064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.201079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.201295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.201308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.201400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.201414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.201503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.201517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.201749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.201763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 
00:27:05.908 [2024-11-28 12:50:48.201869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.201883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.201970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.201985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.202129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.202143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.202304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.202318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.202417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.202431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 
00:27:05.908 [2024-11-28 12:50:48.202513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.202527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.202678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.202692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.202847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.202861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.203003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.203018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.203094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.203108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 
00:27:05.908 [2024-11-28 12:50:48.203205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.203219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.203319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.203333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.908 [2024-11-28 12:50:48.203496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.908 [2024-11-28 12:50:48.203509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.908 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.203603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.203616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.203785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.203796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 
00:27:05.909 [2024-11-28 12:50:48.203886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.203896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.204038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.204049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.204210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.204220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.204288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.204298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.204438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.204447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 
00:27:05.909 [2024-11-28 12:50:48.204520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.204530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.204605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.204616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.204748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.204758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.204840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.204850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.204923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.204933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 
00:27:05.909 [2024-11-28 12:50:48.205018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.205028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.205166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.205177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.205265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.205276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.205403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.205414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.205561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.205571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 
00:27:05.909 [2024-11-28 12:50:48.205636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.205646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.205784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.205794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.205942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.205958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.206024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.206034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.206105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.206115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 
00:27:05.909 [2024-11-28 12:50:48.206264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.206275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.206363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.206374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.206443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.206453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.206589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.206599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.206685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.206695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 
00:27:05.909 [2024-11-28 12:50:48.206778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.206790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.206938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.206952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.207156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.207167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.207369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.207379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.207581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.207591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 
00:27:05.909 [2024-11-28 12:50:48.207662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.207672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.207805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.207814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.207891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.909 [2024-11-28 12:50:48.207900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.909 qpair failed and we were unable to recover it. 00:27:05.909 [2024-11-28 12:50:48.207996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.208007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.208204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.208214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 
00:27:05.910 [2024-11-28 12:50:48.208366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.208376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.208527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.208537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.208614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.208625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.208708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.208719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.208856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.208866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 
00:27:05.910 [2024-11-28 12:50:48.209031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.209042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.209242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.209252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.209430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.209440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.209583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.209592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.209743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.209754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 
00:27:05.910 [2024-11-28 12:50:48.209885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.209895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.210027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.210038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.210171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.210181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.210309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.210319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.210486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.210496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 
00:27:05.910 [2024-11-28 12:50:48.210630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.210641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.210727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.210737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.210955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.210966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.211061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.211071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.211219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.211229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 
00:27:05.910 [2024-11-28 12:50:48.211366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.211376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.211466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.211476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.211537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.211547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.211690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.211700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.211836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.211847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 
00:27:05.910 [2024-11-28 12:50:48.212073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.212083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.212217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.212227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.212384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.212395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.212528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.212538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.212706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.212716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 
00:27:05.910 [2024-11-28 12:50:48.212935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.212952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.213174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.213184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.213271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.213281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.213414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.910 [2024-11-28 12:50:48.213425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.910 qpair failed and we were unable to recover it. 00:27:05.910 [2024-11-28 12:50:48.213580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-11-28 12:50:48.213590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 
00:27:05.911 [2024-11-28 12:50:48.213690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-11-28 12:50:48.213701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-11-28 12:50:48.213836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-11-28 12:50:48.213846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-11-28 12:50:48.214081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-11-28 12:50:48.214091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-11-28 12:50:48.214236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-11-28 12:50:48.214246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-11-28 12:50:48.214328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-11-28 12:50:48.214338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 
00:27:05.911 [2024-11-28 12:50:48.214415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-11-28 12:50:48.214425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-11-28 12:50:48.214567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-11-28 12:50:48.214578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-11-28 12:50:48.214726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-11-28 12:50:48.214737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-11-28 12:50:48.214827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-11-28 12:50:48.214837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-11-28 12:50:48.214904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-11-28 12:50:48.214914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 
00:27:05.911 [2024-11-28 12:50:48.215067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-11-28 12:50:48.215078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-11-28 12:50:48.215174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-11-28 12:50:48.215185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-11-28 12:50:48.215327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-11-28 12:50:48.215338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-11-28 12:50:48.215430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-11-28 12:50:48.215441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-11-28 12:50:48.215596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-11-28 12:50:48.215606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 
00:27:05.911 [2024-11-28 12:50:48.215696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-11-28 12:50:48.215707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-11-28 12:50:48.215777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-11-28 12:50:48.215787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-11-28 12:50:48.215876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-11-28 12:50:48.215886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-11-28 12:50:48.215978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-11-28 12:50:48.215988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 00:27:05.911 [2024-11-28 12:50:48.216142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-11-28 12:50:48.216152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 
00:27:05.911 [2024-11-28 12:50:48.216232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.911 [2024-11-28 12:50:48.216242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.911 qpair failed and we were unable to recover it. 
00:27:05.911 [... preceding three messages repeated through 12:50:48.232139 for tqpair=0x7f8c5c000b90 and tqpair=0xd02be0: every connect() attempt to 10.0.0.2, port=4420 failed with errno = 111 and the qpair could not be recovered ...]
00:27:05.914 [2024-11-28 12:50:48.232219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.914 [2024-11-28 12:50:48.232229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.914 qpair failed and we were unable to recover it. 00:27:05.914 [2024-11-28 12:50:48.232375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.914 [2024-11-28 12:50:48.232385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.914 qpair failed and we were unable to recover it. 00:27:05.914 [2024-11-28 12:50:48.232468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.914 [2024-11-28 12:50:48.232478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.914 qpair failed and we were unable to recover it. 00:27:05.914 [2024-11-28 12:50:48.232630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.914 [2024-11-28 12:50:48.232640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.914 qpair failed and we were unable to recover it. 00:27:05.914 [2024-11-28 12:50:48.232809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.914 [2024-11-28 12:50:48.232820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.914 qpair failed and we were unable to recover it. 
00:27:05.914 [2024-11-28 12:50:48.232904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.914 [2024-11-28 12:50:48.232914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.914 qpair failed and we were unable to recover it. 00:27:05.914 [2024-11-28 12:50:48.232998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.914 [2024-11-28 12:50:48.233009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.914 qpair failed and we were unable to recover it. 00:27:05.914 [2024-11-28 12:50:48.233075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.914 [2024-11-28 12:50:48.233085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.914 qpair failed and we were unable to recover it. 00:27:05.914 [2024-11-28 12:50:48.233223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.914 [2024-11-28 12:50:48.233233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.914 qpair failed and we were unable to recover it. 00:27:05.914 [2024-11-28 12:50:48.233307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.914 [2024-11-28 12:50:48.233317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.914 qpair failed and we were unable to recover it. 
00:27:05.914 [2024-11-28 12:50:48.233387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.914 [2024-11-28 12:50:48.233397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.914 qpair failed and we were unable to recover it. 00:27:05.914 [2024-11-28 12:50:48.233595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.914 [2024-11-28 12:50:48.233605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.914 qpair failed and we were unable to recover it. 00:27:05.914 [2024-11-28 12:50:48.233753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.914 [2024-11-28 12:50:48.233763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.914 qpair failed and we were unable to recover it. 00:27:05.914 [2024-11-28 12:50:48.233857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.914 [2024-11-28 12:50:48.233867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.914 qpair failed and we were unable to recover it. 00:27:05.914 [2024-11-28 12:50:48.233999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.914 [2024-11-28 12:50:48.234010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.914 qpair failed and we were unable to recover it. 
00:27:05.914 [2024-11-28 12:50:48.234154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.914 [2024-11-28 12:50:48.234164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.914 qpair failed and we were unable to recover it. 00:27:05.914 [2024-11-28 12:50:48.234324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.914 [2024-11-28 12:50:48.234334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.914 qpair failed and we were unable to recover it. 00:27:05.914 [2024-11-28 12:50:48.234433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.914 [2024-11-28 12:50:48.234444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.914 qpair failed and we were unable to recover it. 00:27:05.914 [2024-11-28 12:50:48.234533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.914 [2024-11-28 12:50:48.234543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.234634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.234644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 
00:27:05.915 [2024-11-28 12:50:48.234784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.234796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.235060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.235071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.235165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.235175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.235283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.235293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.235359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.235369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 
00:27:05.915 [2024-11-28 12:50:48.235444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.235455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.235516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.235526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.235676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.235686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.235867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.235877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.235958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.235969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 
00:27:05.915 [2024-11-28 12:50:48.236116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.236126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.236261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.236271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.236421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.236431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.236515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.236525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.236672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.236683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 
00:27:05.915 [2024-11-28 12:50:48.236783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.236794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.236922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.236933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.237007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.237018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.237147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.237157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.237389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.237399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 
00:27:05.915 [2024-11-28 12:50:48.237576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.237587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.237658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.237668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.237732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.237742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.237966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.237978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.238064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.238074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 
00:27:05.915 [2024-11-28 12:50:48.238276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.238287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.238370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.238380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.238475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.238485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.238627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.238638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.238787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.238797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 
00:27:05.915 [2024-11-28 12:50:48.238944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.238959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.239040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.239050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.239254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.239264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.239408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.239419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.915 [2024-11-28 12:50:48.239550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.239560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 
00:27:05.915 [2024-11-28 12:50:48.239702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.915 [2024-11-28 12:50:48.239712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.915 qpair failed and we were unable to recover it. 00:27:05.916 [2024-11-28 12:50:48.239919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.916 [2024-11-28 12:50:48.239930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.916 qpair failed and we were unable to recover it. 00:27:05.916 [2024-11-28 12:50:48.240078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.916 [2024-11-28 12:50:48.240089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.916 qpair failed and we were unable to recover it. 00:27:05.916 [2024-11-28 12:50:48.240162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.916 [2024-11-28 12:50:48.240172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.916 qpair failed and we were unable to recover it. 00:27:05.916 [2024-11-28 12:50:48.240319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.916 [2024-11-28 12:50:48.240328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.916 qpair failed and we were unable to recover it. 
00:27:05.916 [2024-11-28 12:50:48.240473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.916 [2024-11-28 12:50:48.240486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.916 qpair failed and we were unable to recover it. 00:27:05.916 [2024-11-28 12:50:48.240583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.916 [2024-11-28 12:50:48.240593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.916 qpair failed and we were unable to recover it. 00:27:05.916 [2024-11-28 12:50:48.240797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.916 [2024-11-28 12:50:48.240807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.916 qpair failed and we were unable to recover it. 00:27:05.916 [2024-11-28 12:50:48.240894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.916 [2024-11-28 12:50:48.240904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.916 qpair failed and we were unable to recover it. 00:27:05.916 [2024-11-28 12:50:48.240981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.916 [2024-11-28 12:50:48.240991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.916 qpair failed and we were unable to recover it. 
00:27:05.916 [2024-11-28 12:50:48.241132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:05.916 [2024-11-28 12:50:48.241140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.916 [2024-11-28 12:50:48.241151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.916 qpair failed and we were unable to recover it. 00:27:05.916 [2024-11-28 12:50:48.241238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.916 [2024-11-28 12:50:48.241248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.916 qpair failed and we were unable to recover it. 00:27:05.916 [2024-11-28 12:50:48.241383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.916 [2024-11-28 12:50:48.241394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.916 qpair failed and we were unable to recover it. 00:27:05.916 [2024-11-28 12:50:48.241480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.916 [2024-11-28 12:50:48.241490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.916 qpair failed and we were unable to recover it. 00:27:05.916 [2024-11-28 12:50:48.241619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.916 [2024-11-28 12:50:48.241629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.916 qpair failed and we were unable to recover it. 
00:27:05.916 [2024-11-28 12:50:48.241777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.916 [2024-11-28 12:50:48.241788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.916 qpair failed and we were unable to recover it. 00:27:05.916 [2024-11-28 12:50:48.241870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.916 [2024-11-28 12:50:48.241880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.916 qpair failed and we were unable to recover it. 00:27:05.916 [2024-11-28 12:50:48.241962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.916 [2024-11-28 12:50:48.241973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.916 qpair failed and we were unable to recover it. 00:27:05.916 [2024-11-28 12:50:48.242057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.916 [2024-11-28 12:50:48.242069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.916 qpair failed and we were unable to recover it. 00:27:05.916 [2024-11-28 12:50:48.242232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.916 [2024-11-28 12:50:48.242243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.916 qpair failed and we were unable to recover it. 
00:27:05.916 [2024-11-28 12:50:48.242381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.916 [2024-11-28 12:50:48.242390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.916 qpair failed and we were unable to recover it. 
00:27:05.919 [2024-11-28 12:50:48.261276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.261287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-11-28 12:50:48.261514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.261525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-11-28 12:50:48.261726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.261737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-11-28 12:50:48.261872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.261882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-11-28 12:50:48.262048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.262059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 
00:27:05.919 [2024-11-28 12:50:48.262212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.262223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-11-28 12:50:48.262369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.262380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-11-28 12:50:48.262595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.262606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-11-28 12:50:48.262795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.262806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-11-28 12:50:48.263034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.263045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 
00:27:05.919 [2024-11-28 12:50:48.263206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.263217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-11-28 12:50:48.263309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.263319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-11-28 12:50:48.263547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.263558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-11-28 12:50:48.263735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.263745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-11-28 12:50:48.263921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.263932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 
00:27:05.919 [2024-11-28 12:50:48.264092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.264103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-11-28 12:50:48.264206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.264217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-11-28 12:50:48.264436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.264447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-11-28 12:50:48.264576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.264586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-11-28 12:50:48.264662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.264672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 
00:27:05.919 [2024-11-28 12:50:48.264894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.264904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-11-28 12:50:48.265170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.265181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-11-28 12:50:48.265379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.265390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-11-28 12:50:48.265610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.265620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-11-28 12:50:48.265843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.265853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 
00:27:05.919 [2024-11-28 12:50:48.266019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.266029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-11-28 12:50:48.266259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.266270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-11-28 12:50:48.266420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.919 [2024-11-28 12:50:48.266431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.919 qpair failed and we were unable to recover it. 00:27:05.919 [2024-11-28 12:50:48.266631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.266641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.266781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.266791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 
00:27:05.920 [2024-11-28 12:50:48.267016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.267030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.267231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.267241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.267463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.267473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.267572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.267582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.267657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.267667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 
00:27:05.920 [2024-11-28 12:50:48.267826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.267836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.267909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.267919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.268096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.268107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.268337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.268348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.268567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.268577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 
00:27:05.920 [2024-11-28 12:50:48.268817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.268828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.269076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.269086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.269329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.269340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.269584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.269594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.269798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.269809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 
00:27:05.920 [2024-11-28 12:50:48.269982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.269993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.270167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.270177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.270376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.270386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.270613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.270623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.270845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.270855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 
00:27:05.920 [2024-11-28 12:50:48.271034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.271045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.271274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.271285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.271438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.271449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.271569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.271580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.271737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.271747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 
00:27:05.920 [2024-11-28 12:50:48.271885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.271895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.272123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.272134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.272240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.272251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.272506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.272517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.272726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.272736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 
00:27:05.920 [2024-11-28 12:50:48.272881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.272892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.273132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.273143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.273315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.273326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.273548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.273558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.273764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.273774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 
00:27:05.920 [2024-11-28 12:50:48.273999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.274010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.274235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.274246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.274471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.274481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.274646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.274657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.274860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.274871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 
00:27:05.920 [2024-11-28 12:50:48.275098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.275111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.275352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.275362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.920 qpair failed and we were unable to recover it. 00:27:05.920 [2024-11-28 12:50:48.275517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.920 [2024-11-28 12:50:48.275527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.275775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.275786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.276011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.276022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 
00:27:05.921 [2024-11-28 12:50:48.276193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.276204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.276426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.276437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.276638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.276648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.276874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.276884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.277111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.277122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 
00:27:05.921 [2024-11-28 12:50:48.277329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.277339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.277541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.277551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.277684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.277695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.277844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.277855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.277954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.277966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 
00:27:05.921 [2024-11-28 12:50:48.278172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.278182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.278334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.278344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.278591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.278602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.278824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.278834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.278915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.278925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 
00:27:05.921 [2024-11-28 12:50:48.279178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.279189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.279414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.279425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.279659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.279669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.279811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.279822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.280020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.280032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 
00:27:05.921 [2024-11-28 12:50:48.280279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.280290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.280514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.280526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.280630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.280641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.280816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.280828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.281033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.281046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 
00:27:05.921 [2024-11-28 12:50:48.281260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.281271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.281470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.281481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.281707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.281717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.281852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.281862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.282061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.282072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 
00:27:05.921 [2024-11-28 12:50:48.282276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.282287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.282507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.282519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.282670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.282682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.282914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.282925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.283130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.283141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 
00:27:05.921 [2024-11-28 12:50:48.283311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.283325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.283515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.921 [2024-11-28 12:50:48.283544] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.921 [2024-11-28 12:50:48.283547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.283553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.921 [2024-11-28 12:50:48.283557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 [2024-11-28 12:50:48.283560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.283566] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:05.921 [2024-11-28 12:50:48.283734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.283745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 
00:27:05.921 [2024-11-28 12:50:48.283898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.283910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.284132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.284144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.284367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.284378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.284602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.284613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 00:27:05.921 [2024-11-28 12:50:48.284759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.921 [2024-11-28 12:50:48.284770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.921 qpair failed and we were unable to recover it. 
00:27:05.922 [2024-11-28 12:50:48.284915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.284925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.285174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.285185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.285171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:05.922 [2024-11-28 12:50:48.285281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:05.922 [2024-11-28 12:50:48.285334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.285344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.285282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:05.922 [2024-11-28 12:50:48.285195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:05.922 [2024-11-28 12:50:48.285560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.285587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 
00:27:05.922 [2024-11-28 12:50:48.285759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.285774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.286011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.286027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.286220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.286235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.286444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.286458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.286639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.286654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 
00:27:05.922 [2024-11-28 12:50:48.286763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.286778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.286988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.287004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.287262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.287277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.287366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.287381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.287493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.287507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 
00:27:05.922 [2024-11-28 12:50:48.287668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.287682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.287859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.287874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.288144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.288170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.288288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.288303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.288455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.288469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 
00:27:05.922 [2024-11-28 12:50:48.288706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.288721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.288882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.288896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.289126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.289141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.289304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.289318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.289482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.289496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 
00:27:05.922 [2024-11-28 12:50:48.289648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.289662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.289871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.289884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.290044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.290059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.290291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.290305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.290544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.290558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 
00:27:05.922 [2024-11-28 12:50:48.290743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.290760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.290994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.291008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.291247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.291262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.291500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.291514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.291750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.291764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 
00:27:05.922 [2024-11-28 12:50:48.291999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.292014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.292253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.292267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.292421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.292435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.292622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.292637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.292820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.292834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 
00:27:05.922 [2024-11-28 12:50:48.292992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.293006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.293246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.293260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.922 [2024-11-28 12:50:48.293497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.922 [2024-11-28 12:50:48.293512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.922 qpair failed and we were unable to recover it. 00:27:05.923 [2024-11-28 12:50:48.293748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-11-28 12:50:48.293763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-11-28 12:50:48.293934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-11-28 12:50:48.293955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 
00:27:05.923 [2024-11-28 12:50:48.294194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-11-28 12:50:48.294209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-11-28 12:50:48.294443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-11-28 12:50:48.294457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-11-28 12:50:48.294696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-11-28 12:50:48.294710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-11-28 12:50:48.294921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-11-28 12:50:48.294936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 00:27:05.923 [2024-11-28 12:50:48.295102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.923 [2024-11-28 12:50:48.295117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:05.923 qpair failed and we were unable to recover it. 
00:27:05.923 [2024-11-28 12:50:48.295267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.923 [2024-11-28 12:50:48.295281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:05.923 qpair failed and we were unable to recover it.
00:27:05.923 [... the posix_sock_create / nvme_tcp_qpair_connect_sock error triple above repeats verbatim for every reconnect attempt from 12:50:48.295 through 12:50:48.318 (errno = 111, addr=10.0.0.2, port=4420 throughout), cycling through tqpair=0x7f8c64000b90, tqpair=0x7f8c5c000b90, and tqpair=0xd02be0; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:27:05.925 [2024-11-28 12:50:48.318770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-28 12:50:48.318787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-11-28 12:50:48.319053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-28 12:50:48.319070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-11-28 12:50:48.319297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-28 12:50:48.319314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-11-28 12:50:48.319546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-28 12:50:48.319562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-11-28 12:50:48.319723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-28 12:50:48.319740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 
00:27:05.925 [2024-11-28 12:50:48.319954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-28 12:50:48.319972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-11-28 12:50:48.320186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-28 12:50:48.320205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-11-28 12:50:48.320443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-28 12:50:48.320461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd02be0 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-11-28 12:50:48.320635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-28 12:50:48.320659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-11-28 12:50:48.320875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-28 12:50:48.320891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 
00:27:05.925 [2024-11-28 12:50:48.321117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-28 12:50:48.321132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-11-28 12:50:48.321398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-28 12:50:48.321411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-11-28 12:50:48.321637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-28 12:50:48.321649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-11-28 12:50:48.321855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-28 12:50:48.321865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-11-28 12:50:48.322012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-28 12:50:48.322023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 
00:27:05.925 [2024-11-28 12:50:48.322194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-28 12:50:48.322205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-11-28 12:50:48.322358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-28 12:50:48.322369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-11-28 12:50:48.322502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-28 12:50:48.322513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-11-28 12:50:48.322746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-28 12:50:48.322757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.925 [2024-11-28 12:50:48.322830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-28 12:50:48.322841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 
00:27:05.925 [2024-11-28 12:50:48.323065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-28 12:50:48.323075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.925 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.323302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.323312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.323575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.323586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.323737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.323748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.323971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.323982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 
00:27:05.926 [2024-11-28 12:50:48.324152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.324163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.324251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.324264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.324409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.324420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.324582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.324592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.324688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.324699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 
00:27:05.926 [2024-11-28 12:50:48.324899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.324910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.325123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.325134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.325359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.325369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.325530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.325540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.325633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.325643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 
00:27:05.926 [2024-11-28 12:50:48.325816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.325827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.326006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.326016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.326103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.326113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.326263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.326274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.326449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.326460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 
00:27:05.926 [2024-11-28 12:50:48.326564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.326575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.326732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.326741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.326965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.326976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.327045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.327055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.327298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.327308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 
00:27:05.926 [2024-11-28 12:50:48.327536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.327547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.327782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.327793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.327954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.327965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.328214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.328226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.328372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.328383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 
00:27:05.926 [2024-11-28 12:50:48.328527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.328537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.328741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.328755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.328990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.329005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.329281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.329296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.329501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.329515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 
00:27:05.926 [2024-11-28 12:50:48.329743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.329755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.329957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.329969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.330170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.330181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.926 [2024-11-28 12:50:48.330399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.926 [2024-11-28 12:50:48.330412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.926 qpair failed and we were unable to recover it. 00:27:05.927 [2024-11-28 12:50:48.330497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.927 [2024-11-28 12:50:48.330508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.927 qpair failed and we were unable to recover it. 
00:27:05.927 [2024-11-28 12:50:48.330737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.927 [2024-11-28 12:50:48.330749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.927 qpair failed and we were unable to recover it. 00:27:05.927 [2024-11-28 12:50:48.330960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.927 [2024-11-28 12:50:48.330972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.927 qpair failed and we were unable to recover it. 00:27:05.927 [2024-11-28 12:50:48.331248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.927 [2024-11-28 12:50:48.331261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.927 qpair failed and we were unable to recover it. 00:27:05.927 [2024-11-28 12:50:48.331464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.927 [2024-11-28 12:50:48.331476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.927 qpair failed and we were unable to recover it. 00:27:05.927 [2024-11-28 12:50:48.331726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.927 [2024-11-28 12:50:48.331739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.927 qpair failed and we were unable to recover it. 
00:27:05.927 [2024-11-28 12:50:48.331875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.927 [2024-11-28 12:50:48.331886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.927 qpair failed and we were unable to recover it. 00:27:05.927 [2024-11-28 12:50:48.332099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.927 [2024-11-28 12:50:48.332114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.927 qpair failed and we were unable to recover it. 00:27:05.927 [2024-11-28 12:50:48.332341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.927 [2024-11-28 12:50:48.332353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.927 qpair failed and we were unable to recover it. 00:27:05.927 [2024-11-28 12:50:48.332583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.927 [2024-11-28 12:50:48.332594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.927 qpair failed and we were unable to recover it. 00:27:05.927 [2024-11-28 12:50:48.332812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.927 [2024-11-28 12:50:48.332823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.927 qpair failed and we were unable to recover it. 
00:27:05.927 [2024-11-28 12:50:48.332977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.927 [2024-11-28 12:50:48.332989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.927 qpair failed and we were unable to recover it. 00:27:05.927 [2024-11-28 12:50:48.333130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.927 [2024-11-28 12:50:48.333141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.927 qpair failed and we were unable to recover it. 00:27:05.927 [2024-11-28 12:50:48.333353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.927 [2024-11-28 12:50:48.333364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.927 qpair failed and we were unable to recover it. 00:27:05.927 [2024-11-28 12:50:48.333593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.927 [2024-11-28 12:50:48.333604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.927 qpair failed and we were unable to recover it. 00:27:05.927 [2024-11-28 12:50:48.333698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.927 [2024-11-28 12:50:48.333709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.927 qpair failed and we were unable to recover it. 
00:27:05.927 [2024-11-28 12:50:48.333856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.927 [2024-11-28 12:50:48.333866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.927 qpair failed and we were unable to recover it. 
00:27:05.929 [... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." messages for tqpair=0x7f8c5c000b90, addr=10.0.0.2, port=4420 repeat continuously from 12:50:48.334000 through 12:50:48.355991; repeated entries elided ...]
00:27:05.929 [2024-11-28 12:50:48.356221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.929 [2024-11-28 12:50:48.356231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.929 qpair failed and we were unable to recover it. 00:27:05.929 [2024-11-28 12:50:48.356382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.929 [2024-11-28 12:50:48.356392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.929 qpair failed and we were unable to recover it. 00:27:05.929 [2024-11-28 12:50:48.356544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.929 [2024-11-28 12:50:48.356554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.929 qpair failed and we were unable to recover it. 00:27:05.929 [2024-11-28 12:50:48.356709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.929 [2024-11-28 12:50:48.356719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.929 qpair failed and we were unable to recover it. 00:27:05.929 [2024-11-28 12:50:48.356861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.929 [2024-11-28 12:50:48.356871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.929 qpair failed and we were unable to recover it. 
00:27:05.929 [2024-11-28 12:50:48.357118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.929 [2024-11-28 12:50:48.357129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.929 qpair failed and we were unable to recover it. 00:27:05.929 [2024-11-28 12:50:48.357260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.929 [2024-11-28 12:50:48.357271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.929 qpair failed and we were unable to recover it. 00:27:05.929 [2024-11-28 12:50:48.357345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.929 [2024-11-28 12:50:48.357355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.929 qpair failed and we were unable to recover it. 00:27:05.929 [2024-11-28 12:50:48.357519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.929 [2024-11-28 12:50:48.357530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.929 qpair failed and we were unable to recover it. 00:27:05.929 [2024-11-28 12:50:48.357662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.929 [2024-11-28 12:50:48.357672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.929 qpair failed and we were unable to recover it. 
00:27:05.929 [2024-11-28 12:50:48.357819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.929 [2024-11-28 12:50:48.357829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.929 qpair failed and we were unable to recover it. 00:27:05.929 [2024-11-28 12:50:48.357996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.929 [2024-11-28 12:50:48.358007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.929 qpair failed and we were unable to recover it. 00:27:05.929 [2024-11-28 12:50:48.358166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.929 [2024-11-28 12:50:48.358176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.929 qpair failed and we were unable to recover it. 00:27:05.929 [2024-11-28 12:50:48.358328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.929 [2024-11-28 12:50:48.358339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.929 qpair failed and we were unable to recover it. 00:27:05.929 [2024-11-28 12:50:48.358493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.929 [2024-11-28 12:50:48.358503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.929 qpair failed and we were unable to recover it. 
00:27:05.929 [2024-11-28 12:50:48.358736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.929 [2024-11-28 12:50:48.358746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.929 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.358895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.358905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.359126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.359137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.359232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.359242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.359474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.359484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 
00:27:05.930 [2024-11-28 12:50:48.359687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.359698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.359799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.359809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.360032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.360043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.360258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.360268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.360439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.360449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 
00:27:05.930 [2024-11-28 12:50:48.360605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.360616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.360817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.360827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.360976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.360986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.361130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.361140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.361363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.361373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 
00:27:05.930 [2024-11-28 12:50:48.361606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.361617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.361891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.361901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.362075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.362086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.362314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.362324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.362469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.362479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 
00:27:05.930 [2024-11-28 12:50:48.362680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.362691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.362913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.362925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.363150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.363160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.363312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.363322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.363456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.363466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 
00:27:05.930 [2024-11-28 12:50:48.363600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.363609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.363839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.363849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.364018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.364028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.364225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.364235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.364387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.364397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 
00:27:05.930 [2024-11-28 12:50:48.364497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.364507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.364726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.364736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.364936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.364955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.365156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.365167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.365249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.365259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 
00:27:05.930 [2024-11-28 12:50:48.365494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.365505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.365657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.365667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.365814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.365825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.366055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.366065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.366235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.366246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 
00:27:05.930 [2024-11-28 12:50:48.366341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.366351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.366566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.366576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.366718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.366728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.366888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.366898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.367054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.367065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 
00:27:05.930 [2024-11-28 12:50:48.367158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.367168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.367388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.367398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.367549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.367560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.367660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.367671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.930 [2024-11-28 12:50:48.367820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.367830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 
00:27:05.930 [2024-11-28 12:50:48.368017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.930 [2024-11-28 12:50:48.368028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.930 qpair failed and we were unable to recover it. 00:27:05.931 [2024-11-28 12:50:48.368180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.931 [2024-11-28 12:50:48.368190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.931 qpair failed and we were unable to recover it. 00:27:05.931 [2024-11-28 12:50:48.368353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.931 [2024-11-28 12:50:48.368363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.931 qpair failed and we were unable to recover it. 00:27:05.931 [2024-11-28 12:50:48.368571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.931 [2024-11-28 12:50:48.368581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.931 qpair failed and we were unable to recover it. 00:27:05.931 [2024-11-28 12:50:48.368810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.931 [2024-11-28 12:50:48.368820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.931 qpair failed and we were unable to recover it. 
00:27:05.931 [2024-11-28 12:50:48.368991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.931 [2024-11-28 12:50:48.369001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.931 qpair failed and we were unable to recover it. 00:27:05.931 [2024-11-28 12:50:48.369227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.931 [2024-11-28 12:50:48.369237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.931 qpair failed and we were unable to recover it. 00:27:05.931 [2024-11-28 12:50:48.369386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.931 [2024-11-28 12:50:48.369396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.931 qpair failed and we were unable to recover it. 00:27:05.931 [2024-11-28 12:50:48.369460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.931 [2024-11-28 12:50:48.369469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.931 qpair failed and we were unable to recover it. 00:27:05.931 [2024-11-28 12:50:48.369610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.931 [2024-11-28 12:50:48.369620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.931 qpair failed and we were unable to recover it. 
00:27:05.931 [2024-11-28 12:50:48.369843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.931 [2024-11-28 12:50:48.369853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.931 qpair failed and we were unable to recover it. 00:27:05.931 [2024-11-28 12:50:48.370106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.931 [2024-11-28 12:50:48.370118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.931 qpair failed and we were unable to recover it. 00:27:05.931 [2024-11-28 12:50:48.370298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.931 [2024-11-28 12:50:48.370309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.931 qpair failed and we were unable to recover it. 00:27:05.931 [2024-11-28 12:50:48.370465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.931 [2024-11-28 12:50:48.370476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.931 qpair failed and we were unable to recover it. 00:27:05.931 [2024-11-28 12:50:48.370675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.931 [2024-11-28 12:50:48.370685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:05.931 qpair failed and we were unable to recover it. 
00:27:05.932 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:05.932 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:27:05.932 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:05.932 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:05.932 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:06.202 [2024-11-28 12:50:48.391216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.202 [2024-11-28 12:50:48.391228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.202 qpair failed and we were unable to recover it. 00:27:06.202 [2024-11-28 12:50:48.391375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.202 [2024-11-28 12:50:48.391387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.202 qpair failed and we were unable to recover it. 00:27:06.202 [2024-11-28 12:50:48.391610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.202 [2024-11-28 12:50:48.391623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.202 qpair failed and we were unable to recover it. 00:27:06.202 [2024-11-28 12:50:48.391844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.202 [2024-11-28 12:50:48.391855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.202 qpair failed and we were unable to recover it. 00:27:06.202 [2024-11-28 12:50:48.391933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.202 [2024-11-28 12:50:48.391944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.202 qpair failed and we were unable to recover it. 
00:27:06.202 [2024-11-28 12:50:48.392104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.202 [2024-11-28 12:50:48.392117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.202 qpair failed and we were unable to recover it. 00:27:06.202 [2024-11-28 12:50:48.392276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.202 [2024-11-28 12:50:48.392287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.202 qpair failed and we were unable to recover it. 00:27:06.202 [2024-11-28 12:50:48.392475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.202 [2024-11-28 12:50:48.392486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.202 qpair failed and we were unable to recover it. 00:27:06.202 [2024-11-28 12:50:48.392582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.202 [2024-11-28 12:50:48.392593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.202 qpair failed and we were unable to recover it. 00:27:06.202 [2024-11-28 12:50:48.392792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.202 [2024-11-28 12:50:48.392803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.202 qpair failed and we were unable to recover it. 
00:27:06.203 [2024-11-28 12:50:48.392958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.392969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.393060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.393072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.393158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.393168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.393243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.393254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.393316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.393327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 
00:27:06.203 [2024-11-28 12:50:48.393474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.393485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.393572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.393583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.393678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.393688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.393917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.393929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.394091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.394102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 
00:27:06.203 [2024-11-28 12:50:48.394198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.394208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.394368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.394378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.394553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.394565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.394723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.394734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.394821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.394831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 
00:27:06.203 [2024-11-28 12:50:48.394979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.394990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.395146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.395157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.395304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.395315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.395415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.395426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.395642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.395653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 
00:27:06.203 [2024-11-28 12:50:48.395882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.395893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.396036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.396048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.396149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.396161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.396307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.396318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.396469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.396480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 
00:27:06.203 [2024-11-28 12:50:48.396580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.396591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.396681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.396692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.396843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.396854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.397074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.397086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.397170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.397181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 
00:27:06.203 [2024-11-28 12:50:48.397402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.397413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.397561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.397572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.397785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.397796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.397890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.397902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.398112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.398123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 
00:27:06.203 [2024-11-28 12:50:48.398226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.398240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.398401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.398412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.398588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.203 [2024-11-28 12:50:48.398599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.203 qpair failed and we were unable to recover it. 00:27:06.203 [2024-11-28 12:50:48.398809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.398820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.398982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.398994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 
00:27:06.204 [2024-11-28 12:50:48.399142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.399153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.399296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.399307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.399391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.399402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.399661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.399672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.399816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.399827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 
00:27:06.204 [2024-11-28 12:50:48.400006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.400017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.400161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.400172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.400314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.400325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.400492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.400502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.400595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.400607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 
00:27:06.204 [2024-11-28 12:50:48.400832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.400843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.400992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.401003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.401244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.401255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.401341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.401352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.401498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.401509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 
00:27:06.204 [2024-11-28 12:50:48.401668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.401678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.401851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.401862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.402018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.402030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.402178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.402188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.402268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.402279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 
00:27:06.204 [2024-11-28 12:50:48.402371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.402381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.402611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.402622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.402765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.402776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.402923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.402934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.403069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.403080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 
00:27:06.204 [2024-11-28 12:50:48.403248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.403259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.403411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.403422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.403506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.403517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.403611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.403621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 00:27:06.204 [2024-11-28 12:50:48.403751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.204 [2024-11-28 12:50:48.403761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.204 qpair failed and we were unable to recover it. 
00:27:06.207 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:06.207 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:06.207 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.207 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:06.207 [2024-11-28 12:50:48.420863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.207 [2024-11-28 12:50:48.420875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.207 qpair failed and we were unable to recover it. 00:27:06.207 [2024-11-28 12:50:48.421026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.207 [2024-11-28 12:50:48.421037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.207 qpair failed and we were unable to recover it. 00:27:06.207 [2024-11-28 12:50:48.421140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.207 [2024-11-28 12:50:48.421152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.207 qpair failed and we were unable to recover it. 00:27:06.207 [2024-11-28 12:50:48.421230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.421241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.421389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.421400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 
00:27:06.208 [2024-11-28 12:50:48.421478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.421489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.421727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.421738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.421977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.421988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.422167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.422178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.422267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.422278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 
00:27:06.208 [2024-11-28 12:50:48.422381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.422392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.422613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.422624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.422796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.422806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.422893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.422903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.423007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.423018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 
00:27:06.208 [2024-11-28 12:50:48.423175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.423186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.423274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.423285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.423375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.423386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.423454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.423465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.423642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.423653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 
00:27:06.208 [2024-11-28 12:50:48.423789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.423800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.423890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.423901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.423973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.423984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.424080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.424091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.424180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.424191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 
00:27:06.208 [2024-11-28 12:50:48.424340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.424351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.424506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.424517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.424592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.424603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.424683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.424694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.424841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.424852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 
00:27:06.208 [2024-11-28 12:50:48.425009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.425021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.425136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.425146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.425245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.425255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.425341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.425352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.425446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.425457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 
00:27:06.208 [2024-11-28 12:50:48.425610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.425621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.425711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.425723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.425809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.425820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.426088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.426100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.426182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.426192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 
00:27:06.208 [2024-11-28 12:50:48.426300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.426313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.208 [2024-11-28 12:50:48.426512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.208 [2024-11-28 12:50:48.426526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.208 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.426660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.426671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.426801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.426812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.427037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.427049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 
00:27:06.209 [2024-11-28 12:50:48.427199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.427210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.427308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.427320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.427467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.427479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.427565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.427576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.427742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.427754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 
00:27:06.209 [2024-11-28 12:50:48.427966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.427977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.428191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.428203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.428348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.428359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.428673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.428684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.428856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.428867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 
00:27:06.209 [2024-11-28 12:50:48.429076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.429089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.429198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.429209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.429366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.429376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.429519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.429530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.429680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.429692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 
00:27:06.209 [2024-11-28 12:50:48.429910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.429920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.430119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.430130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.430220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.430230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.430319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.430331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.430485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.430496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 
00:27:06.209 [2024-11-28 12:50:48.430662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.430673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.430763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.430774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.431017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.431029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.431138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.431149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.431355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.431367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 
00:27:06.209 [2024-11-28 12:50:48.431469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.431480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.431729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.431741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.431817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.431828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.431981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.431993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.432125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.432135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 
00:27:06.209 [2024-11-28 12:50:48.432230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.432241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.432392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.432403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.432537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.432547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.432852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.432863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 00:27:06.209 [2024-11-28 12:50:48.433050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.209 [2024-11-28 12:50:48.433061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.209 qpair failed and we were unable to recover it. 
00:27:06.212 [2024-11-28 12:50:48.451382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.451401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.451641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.451652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.451852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.451864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.452053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.452065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.452277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.452289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 
00:27:06.213 [2024-11-28 12:50:48.452510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.452522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.452683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.452695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.452847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.452858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.453081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.453093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.453232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.453243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 
00:27:06.213 [2024-11-28 12:50:48.453377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.453389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.453597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.453612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.453836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.453847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.454026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.454037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.454197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.454207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 
00:27:06.213 [2024-11-28 12:50:48.454411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.454423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.454510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.454522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.454749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.454760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.454946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.454960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.455183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.455194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 
00:27:06.213 [2024-11-28 12:50:48.455327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.455337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.455443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.455454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.455532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.455543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.455768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.455779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.455929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.455940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 
00:27:06.213 [2024-11-28 12:50:48.456093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.456104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.456190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.456200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.456415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.456426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.456569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.456580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.456791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.456803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 
00:27:06.213 [2024-11-28 12:50:48.456937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.456951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 Malloc0 00:27:06.213 [2024-11-28 12:50:48.457200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.457211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.457376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.457386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.457590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.457601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.457792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.457803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 
00:27:06.213 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.213 [2024-11-28 12:50:48.458016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.458027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.213 [2024-11-28 12:50:48.458113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.213 [2024-11-28 12:50:48.458124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.213 qpair failed and we were unable to recover it. 00:27:06.214 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:06.214 [2024-11-28 12:50:48.458297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.458317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.458483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.458493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 
00:27:06.214 [2024-11-28 12:50:48.458627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.214 [2024-11-28 12:50:48.458637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.458783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.458793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.214 [2024-11-28 12:50:48.458966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.458977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.459180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.459190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 
00:27:06.214 [2024-11-28 12:50:48.459258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.459267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.459457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.459469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.459622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.459632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.459764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.459774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.459868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.459879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 
00:27:06.214 [2024-11-28 12:50:48.460083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.460094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.460240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.460251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.460444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.460454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.460693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.460703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.460929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.460941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 
00:27:06.214 [2024-11-28 12:50:48.461119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.461129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.461338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.461348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.461447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.461457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.461621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.461631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.461785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.461795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 
00:27:06.214 [2024-11-28 12:50:48.461993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.462004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.462175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.462185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.462413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.462423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.462634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.462645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.462832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.462842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 
00:27:06.214 [2024-11-28 12:50:48.462929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.462939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.463097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.463108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.463204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.463214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.463388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.463398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.463483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.463493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 
00:27:06.214 [2024-11-28 12:50:48.463577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.463587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.463812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.463822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.463967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.463977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.464113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.464123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.214 [2024-11-28 12:50:48.464288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.464298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 
00:27:06.214 [2024-11-28 12:50:48.464434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.214 [2024-11-28 12:50:48.464444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.214 qpair failed and we were unable to recover it. 00:27:06.215 [2024-11-28 12:50:48.464669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-11-28 12:50:48.464679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-11-28 12:50:48.464799] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.215 [2024-11-28 12:50:48.464926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-11-28 12:50:48.464936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c5c000b90 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-11-28 12:50:48.465137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-11-28 12:50:48.465171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 00:27:06.215 [2024-11-28 12:50:48.465298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.215 [2024-11-28 12:50:48.465314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c58000b90 with addr=10.0.0.2, port=4420 00:27:06.215 qpair failed and we were unable to recover it. 
00:27:06.216 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.216 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:06.216 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.216 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:06.217 [2024-11-28 12:50:48.479491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.217 [2024-11-28 12:50:48.479519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:06.217 qpair failed and we were unable to recover it.
00:27:06.217 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.217 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:06.217 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.217 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:06.217 [2024-11-28 12:50:48.482944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.217 [2024-11-28 12:50:48.482964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.217 qpair failed and we were unable to recover it. 00:27:06.217 [2024-11-28 12:50:48.483066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.217 [2024-11-28 12:50:48.483081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.217 qpair failed and we were unable to recover it. 00:27:06.217 [2024-11-28 12:50:48.483256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.217 [2024-11-28 12:50:48.483271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.217 qpair failed and we were unable to recover it. 00:27:06.217 [2024-11-28 12:50:48.483479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.217 [2024-11-28 12:50:48.483493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.217 qpair failed and we were unable to recover it. 00:27:06.217 [2024-11-28 12:50:48.483752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.217 [2024-11-28 12:50:48.483766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.217 qpair failed and we were unable to recover it. 
00:27:06.217 [2024-11-28 12:50:48.483997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.217 [2024-11-28 12:50:48.484012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.217 qpair failed and we were unable to recover it. 00:27:06.217 [2024-11-28 12:50:48.484123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.217 [2024-11-28 12:50:48.484138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.217 qpair failed and we were unable to recover it. 00:27:06.217 [2024-11-28 12:50:48.484298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.217 [2024-11-28 12:50:48.484313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.217 qpair failed and we were unable to recover it. 00:27:06.217 [2024-11-28 12:50:48.484464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.217 [2024-11-28 12:50:48.484479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.217 qpair failed and we were unable to recover it. 00:27:06.217 [2024-11-28 12:50:48.484644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.217 [2024-11-28 12:50:48.484658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.217 qpair failed and we were unable to recover it. 
00:27:06.217 [2024-11-28 12:50:48.484815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.217 [2024-11-28 12:50:48.484830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 00:27:06.218 [2024-11-28 12:50:48.485040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.485055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 00:27:06.218 [2024-11-28 12:50:48.485155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.485170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 00:27:06.218 [2024-11-28 12:50:48.485390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.485405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 00:27:06.218 [2024-11-28 12:50:48.485615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.485629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 
00:27:06.218 [2024-11-28 12:50:48.485781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.485795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 00:27:06.218 [2024-11-28 12:50:48.485951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.485966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 00:27:06.218 [2024-11-28 12:50:48.486228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.486242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 00:27:06.218 [2024-11-28 12:50:48.486347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.486361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 00:27:06.218 [2024-11-28 12:50:48.486511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.486525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 
00:27:06.218 [2024-11-28 12:50:48.486710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.486724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 00:27:06.218 [2024-11-28 12:50:48.486900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.486914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 00:27:06.218 [2024-11-28 12:50:48.487024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.487039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 00:27:06.218 [2024-11-28 12:50:48.487204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.487222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 00:27:06.218 [2024-11-28 12:50:48.487432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.487446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 
00:27:06.218 [2024-11-28 12:50:48.487670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.487684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 00:27:06.218 [2024-11-28 12:50:48.487937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.487956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 00:27:06.218 [2024-11-28 12:50:48.488150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.488165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 00:27:06.218 [2024-11-28 12:50:48.488266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.488280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 00:27:06.218 [2024-11-28 12:50:48.488419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.488434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 
00:27:06.218 [2024-11-28 12:50:48.488592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.488607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 00:27:06.218 [2024-11-28 12:50:48.488739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.488753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 00:27:06.218 [2024-11-28 12:50:48.488843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.488857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 00:27:06.218 [2024-11-28 12:50:48.489040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.489055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 00:27:06.218 [2024-11-28 12:50:48.489156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.218 [2024-11-28 12:50:48.489170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420 00:27:06.218 qpair failed and we were unable to recover it. 
00:27:06.218 [2024-11-28 12:50:48.489315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.218 [2024-11-28 12:50:48.489330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:06.218 qpair failed and we were unable to recover it.
00:27:06.218 [2024-11-28 12:50:48.489487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.218 [2024-11-28 12:50:48.489503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:06.218 qpair failed and we were unable to recover it.
00:27:06.218 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.218 [2024-11-28 12:50:48.489697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.218 [2024-11-28 12:50:48.489712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:06.218 qpair failed and we were unable to recover it.
00:27:06.218 [2024-11-28 12:50:48.489874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.218 [2024-11-28 12:50:48.489889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:06.218 qpair failed and we were unable to recover it.
00:27:06.218 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:06.218 [2024-11-28 12:50:48.490011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.218 [2024-11-28 12:50:48.490026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:06.218 qpair failed and we were unable to recover it.
00:27:06.218 [2024-11-28 12:50:48.490236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.218 [2024-11-28 12:50:48.490252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:06.218 qpair failed and we were unable to recover it.
00:27:06.218 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.218 [2024-11-28 12:50:48.490502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.218 [2024-11-28 12:50:48.490517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:06.218 qpair failed and we were unable to recover it.
00:27:06.218 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:06.218 [2024-11-28 12:50:48.490689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.218 [2024-11-28 12:50:48.490705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:06.218 qpair failed and we were unable to recover it.
00:27:06.218 [2024-11-28 12:50:48.490846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.218 [2024-11-28 12:50:48.490861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:06.218 qpair failed and we were unable to recover it.
00:27:06.218 [2024-11-28 12:50:48.490944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.218 [2024-11-28 12:50:48.490964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:06.218 qpair failed and we were unable to recover it.
00:27:06.218 [2024-11-28 12:50:48.491055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.218 [2024-11-28 12:50:48.491069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:06.218 qpair failed and we were unable to recover it.
00:27:06.218 [2024-11-28 12:50:48.491306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.219 [2024-11-28 12:50:48.491320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:06.219 qpair failed and we were unable to recover it.
00:27:06.219 [2024-11-28 12:50:48.491418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.219 [2024-11-28 12:50:48.491433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:06.219 qpair failed and we were unable to recover it.
00:27:06.219 [2024-11-28 12:50:48.491539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.219 [2024-11-28 12:50:48.491553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:06.219 qpair failed and we were unable to recover it.
00:27:06.219 [2024-11-28 12:50:48.491715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.219 [2024-11-28 12:50:48.491730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:06.219 qpair failed and we were unable to recover it.
00:27:06.219 [2024-11-28 12:50:48.492002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.219 [2024-11-28 12:50:48.492018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:06.219 qpair failed and we were unable to recover it.
00:27:06.219 [2024-11-28 12:50:48.492185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.219 [2024-11-28 12:50:48.492199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:06.219 qpair failed and we were unable to recover it.
00:27:06.219 [2024-11-28 12:50:48.492300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.219 [2024-11-28 12:50:48.492314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:06.219 qpair failed and we were unable to recover it.
00:27:06.219 [2024-11-28 12:50:48.492411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.219 [2024-11-28 12:50:48.492426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:06.219 qpair failed and we were unable to recover it.
00:27:06.219 [2024-11-28 12:50:48.492523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.219 [2024-11-28 12:50:48.492537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:06.219 qpair failed and we were unable to recover it.
00:27:06.219 [2024-11-28 12:50:48.492844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.219 [2024-11-28 12:50:48.492859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8c64000b90 with addr=10.0.0.2, port=4420
00:27:06.219 qpair failed and we were unable to recover it.
00:27:06.219 [2024-11-28 12:50:48.493014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:06.219 [2024-11-28 12:50:48.495470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.219 [2024-11-28 12:50:48.495556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.219 [2024-11-28 12:50:48.495578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.219 [2024-11-28 12:50:48.495588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.219 [2024-11-28 12:50:48.495597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.219 [2024-11-28 12:50:48.495623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.219 qpair failed and we were unable to recover it.
00:27:06.219 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.219 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:06.219 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.219 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:06.219 [2024-11-28 12:50:48.505395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.219 [2024-11-28 12:50:48.505465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.219 [2024-11-28 12:50:48.505485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.219 [2024-11-28 12:50:48.505494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.219 [2024-11-28 12:50:48.505504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.219 [2024-11-28 12:50:48.505525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.219 qpair failed and we were unable to recover it.
00:27:06.219 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.219 12:50:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2681648
00:27:06.219 [2024-11-28 12:50:48.515379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.219 [2024-11-28 12:50:48.515454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.219 [2024-11-28 12:50:48.515469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.219 [2024-11-28 12:50:48.515476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.219 [2024-11-28 12:50:48.515482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.219 [2024-11-28 12:50:48.515497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.219 qpair failed and we were unable to recover it.
00:27:06.219 [2024-11-28 12:50:48.525335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.219 [2024-11-28 12:50:48.525400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.219 [2024-11-28 12:50:48.525414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.219 [2024-11-28 12:50:48.525421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.219 [2024-11-28 12:50:48.525427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.219 [2024-11-28 12:50:48.525442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.219 qpair failed and we were unable to recover it.
00:27:06.219 [2024-11-28 12:50:48.535311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.219 [2024-11-28 12:50:48.535374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.219 [2024-11-28 12:50:48.535388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.219 [2024-11-28 12:50:48.535395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.219 [2024-11-28 12:50:48.535401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.219 [2024-11-28 12:50:48.535417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.219 qpair failed and we were unable to recover it.
00:27:06.219 [2024-11-28 12:50:48.545355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.219 [2024-11-28 12:50:48.545434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.219 [2024-11-28 12:50:48.545449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.219 [2024-11-28 12:50:48.545456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.219 [2024-11-28 12:50:48.545462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.219 [2024-11-28 12:50:48.545478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.219 qpair failed and we were unable to recover it.
00:27:06.219 [2024-11-28 12:50:48.555351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.219 [2024-11-28 12:50:48.555407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.219 [2024-11-28 12:50:48.555421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.219 [2024-11-28 12:50:48.555428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.219 [2024-11-28 12:50:48.555434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.219 [2024-11-28 12:50:48.555449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.219 qpair failed and we were unable to recover it.
00:27:06.219 [2024-11-28 12:50:48.565387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.219 [2024-11-28 12:50:48.565450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.219 [2024-11-28 12:50:48.565464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.219 [2024-11-28 12:50:48.565471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.219 [2024-11-28 12:50:48.565477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.219 [2024-11-28 12:50:48.565493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.220 qpair failed and we were unable to recover it.
00:27:06.220 [2024-11-28 12:50:48.575511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.220 [2024-11-28 12:50:48.575594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.220 [2024-11-28 12:50:48.575609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.220 [2024-11-28 12:50:48.575615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.220 [2024-11-28 12:50:48.575622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.220 [2024-11-28 12:50:48.575637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.220 qpair failed and we were unable to recover it.
00:27:06.220 [2024-11-28 12:50:48.585507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.220 [2024-11-28 12:50:48.585565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.220 [2024-11-28 12:50:48.585583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.220 [2024-11-28 12:50:48.585589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.220 [2024-11-28 12:50:48.585595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.220 [2024-11-28 12:50:48.585610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.220 qpair failed and we were unable to recover it.
00:27:06.220 [2024-11-28 12:50:48.595480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.220 [2024-11-28 12:50:48.595538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.220 [2024-11-28 12:50:48.595551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.220 [2024-11-28 12:50:48.595558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.220 [2024-11-28 12:50:48.595564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.220 [2024-11-28 12:50:48.595579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.220 qpair failed and we were unable to recover it.
00:27:06.220 [2024-11-28 12:50:48.605514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.220 [2024-11-28 12:50:48.605571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.220 [2024-11-28 12:50:48.605585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.220 [2024-11-28 12:50:48.605592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.220 [2024-11-28 12:50:48.605598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.220 [2024-11-28 12:50:48.605612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.220 qpair failed and we were unable to recover it.
00:27:06.220 [2024-11-28 12:50:48.615588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.220 [2024-11-28 12:50:48.615693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.220 [2024-11-28 12:50:48.615708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.220 [2024-11-28 12:50:48.615715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.220 [2024-11-28 12:50:48.615722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.220 [2024-11-28 12:50:48.615737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.220 qpair failed and we were unable to recover it.
00:27:06.220 [2024-11-28 12:50:48.625632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.220 [2024-11-28 12:50:48.625695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.220 [2024-11-28 12:50:48.625710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.220 [2024-11-28 12:50:48.625717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.220 [2024-11-28 12:50:48.625730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.220 [2024-11-28 12:50:48.625746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.220 qpair failed and we were unable to recover it.
00:27:06.220 [2024-11-28 12:50:48.635643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.220 [2024-11-28 12:50:48.635699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.220 [2024-11-28 12:50:48.635712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.220 [2024-11-28 12:50:48.635719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.220 [2024-11-28 12:50:48.635725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.220 [2024-11-28 12:50:48.635740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.220 qpair failed and we were unable to recover it.
00:27:06.220 [2024-11-28 12:50:48.645701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.220 [2024-11-28 12:50:48.645788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.220 [2024-11-28 12:50:48.645802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.220 [2024-11-28 12:50:48.645809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.220 [2024-11-28 12:50:48.645815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.220 [2024-11-28 12:50:48.645830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.220 qpair failed and we were unable to recover it. 
00:27:06.220 [2024-11-28 12:50:48.655696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.220 [2024-11-28 12:50:48.655774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.220 [2024-11-28 12:50:48.655787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.220 [2024-11-28 12:50:48.655794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.220 [2024-11-28 12:50:48.655800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.220 [2024-11-28 12:50:48.655815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.220 qpair failed and we were unable to recover it.
00:27:06.220 [2024-11-28 12:50:48.665751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.220 [2024-11-28 12:50:48.665808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.220 [2024-11-28 12:50:48.665822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.220 [2024-11-28 12:50:48.665829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.220 [2024-11-28 12:50:48.665835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.220 [2024-11-28 12:50:48.665850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.220 qpair failed and we were unable to recover it.
00:27:06.220 [2024-11-28 12:50:48.675753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.220 [2024-11-28 12:50:48.675829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.220 [2024-11-28 12:50:48.675844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.220 [2024-11-28 12:50:48.675851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.220 [2024-11-28 12:50:48.675857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.220 [2024-11-28 12:50:48.675872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.220 qpair failed and we were unable to recover it.
00:27:06.220 [2024-11-28 12:50:48.685785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.220 [2024-11-28 12:50:48.685842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.220 [2024-11-28 12:50:48.685856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.220 [2024-11-28 12:50:48.685863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.220 [2024-11-28 12:50:48.685869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.220 [2024-11-28 12:50:48.685884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.220 qpair failed and we were unable to recover it.
00:27:06.220 [2024-11-28 12:50:48.695816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.220 [2024-11-28 12:50:48.695877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.220 [2024-11-28 12:50:48.695891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.220 [2024-11-28 12:50:48.695898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.221 [2024-11-28 12:50:48.695903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.221 [2024-11-28 12:50:48.695918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.221 qpair failed and we were unable to recover it.
00:27:06.221 [2024-11-28 12:50:48.705838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.221 [2024-11-28 12:50:48.705897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.221 [2024-11-28 12:50:48.705911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.221 [2024-11-28 12:50:48.705918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.221 [2024-11-28 12:50:48.705924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.221 [2024-11-28 12:50:48.705939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.221 qpair failed and we were unable to recover it.
00:27:06.480 [2024-11-28 12:50:48.715873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.480 [2024-11-28 12:50:48.715938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.480 [2024-11-28 12:50:48.715959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.480 [2024-11-28 12:50:48.715966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.480 [2024-11-28 12:50:48.715972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.480 [2024-11-28 12:50:48.715987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.480 qpair failed and we were unable to recover it.
00:27:06.480 [2024-11-28 12:50:48.725920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.480 [2024-11-28 12:50:48.726019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.480 [2024-11-28 12:50:48.726036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.480 [2024-11-28 12:50:48.726043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.480 [2024-11-28 12:50:48.726049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.480 [2024-11-28 12:50:48.726066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.480 qpair failed and we were unable to recover it.
00:27:06.480 [2024-11-28 12:50:48.735868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.480 [2024-11-28 12:50:48.735928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.480 [2024-11-28 12:50:48.735943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.480 [2024-11-28 12:50:48.735956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.480 [2024-11-28 12:50:48.735963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.480 [2024-11-28 12:50:48.735980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.480 qpair failed and we were unable to recover it.
00:27:06.480 [2024-11-28 12:50:48.745979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.480 [2024-11-28 12:50:48.746036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.480 [2024-11-28 12:50:48.746050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.480 [2024-11-28 12:50:48.746057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.480 [2024-11-28 12:50:48.746063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.480 [2024-11-28 12:50:48.746078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.480 qpair failed and we were unable to recover it.
00:27:06.480 [2024-11-28 12:50:48.755966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.480 [2024-11-28 12:50:48.756022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.480 [2024-11-28 12:50:48.756037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.480 [2024-11-28 12:50:48.756044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.480 [2024-11-28 12:50:48.756054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.480 [2024-11-28 12:50:48.756069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.480 qpair failed and we were unable to recover it.
00:27:06.480 [2024-11-28 12:50:48.766067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.480 [2024-11-28 12:50:48.766127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.480 [2024-11-28 12:50:48.766142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.480 [2024-11-28 12:50:48.766149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.480 [2024-11-28 12:50:48.766155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.480 [2024-11-28 12:50:48.766170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.480 qpair failed and we were unable to recover it.
00:27:06.480 [2024-11-28 12:50:48.776058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.480 [2024-11-28 12:50:48.776118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.480 [2024-11-28 12:50:48.776132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.480 [2024-11-28 12:50:48.776139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.480 [2024-11-28 12:50:48.776145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.480 [2024-11-28 12:50:48.776160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.480 qpair failed and we were unable to recover it.
00:27:06.480 [2024-11-28 12:50:48.786017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.480 [2024-11-28 12:50:48.786074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.480 [2024-11-28 12:50:48.786088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.480 [2024-11-28 12:50:48.786095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.480 [2024-11-28 12:50:48.786101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.480 [2024-11-28 12:50:48.786116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.480 qpair failed and we were unable to recover it.
00:27:06.480 [2024-11-28 12:50:48.796123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.480 [2024-11-28 12:50:48.796184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.480 [2024-11-28 12:50:48.796198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.480 [2024-11-28 12:50:48.796205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.480 [2024-11-28 12:50:48.796211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.480 [2024-11-28 12:50:48.796226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.480 qpair failed and we were unable to recover it.
00:27:06.480 [2024-11-28 12:50:48.806099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.480 [2024-11-28 12:50:48.806168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.480 [2024-11-28 12:50:48.806182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.480 [2024-11-28 12:50:48.806189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.480 [2024-11-28 12:50:48.806196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.480 [2024-11-28 12:50:48.806211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.480 qpair failed and we were unable to recover it.
00:27:06.480 [2024-11-28 12:50:48.816184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.480 [2024-11-28 12:50:48.816240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.480 [2024-11-28 12:50:48.816254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.480 [2024-11-28 12:50:48.816261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.480 [2024-11-28 12:50:48.816267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.480 [2024-11-28 12:50:48.816281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.481 qpair failed and we were unable to recover it.
00:27:06.481 [2024-11-28 12:50:48.826187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.481 [2024-11-28 12:50:48.826246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.481 [2024-11-28 12:50:48.826259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.481 [2024-11-28 12:50:48.826266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.481 [2024-11-28 12:50:48.826272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.481 [2024-11-28 12:50:48.826287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.481 qpair failed and we were unable to recover it.
00:27:06.481 [2024-11-28 12:50:48.836214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.481 [2024-11-28 12:50:48.836267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.481 [2024-11-28 12:50:48.836281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.481 [2024-11-28 12:50:48.836288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.481 [2024-11-28 12:50:48.836293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.481 [2024-11-28 12:50:48.836308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.481 qpair failed and we were unable to recover it.
00:27:06.481 [2024-11-28 12:50:48.846252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.481 [2024-11-28 12:50:48.846310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.481 [2024-11-28 12:50:48.846328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.481 [2024-11-28 12:50:48.846334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.481 [2024-11-28 12:50:48.846340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.481 [2024-11-28 12:50:48.846355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.481 qpair failed and we were unable to recover it.
00:27:06.481 [2024-11-28 12:50:48.856283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.481 [2024-11-28 12:50:48.856343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.481 [2024-11-28 12:50:48.856357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.481 [2024-11-28 12:50:48.856364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.481 [2024-11-28 12:50:48.856370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.481 [2024-11-28 12:50:48.856385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.481 qpair failed and we were unable to recover it.
00:27:06.481 [2024-11-28 12:50:48.866337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.481 [2024-11-28 12:50:48.866418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.481 [2024-11-28 12:50:48.866431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.481 [2024-11-28 12:50:48.866438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.481 [2024-11-28 12:50:48.866444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.481 [2024-11-28 12:50:48.866460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.481 qpair failed and we were unable to recover it.
00:27:06.481 [2024-11-28 12:50:48.876328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.481 [2024-11-28 12:50:48.876387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.481 [2024-11-28 12:50:48.876401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.481 [2024-11-28 12:50:48.876407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.481 [2024-11-28 12:50:48.876413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.481 [2024-11-28 12:50:48.876428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.481 qpair failed and we were unable to recover it.
00:27:06.481 [2024-11-28 12:50:48.886367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.481 [2024-11-28 12:50:48.886428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.481 [2024-11-28 12:50:48.886442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.481 [2024-11-28 12:50:48.886452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.481 [2024-11-28 12:50:48.886457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.481 [2024-11-28 12:50:48.886473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.481 qpair failed and we were unable to recover it.
00:27:06.481 [2024-11-28 12:50:48.896398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.481 [2024-11-28 12:50:48.896469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.481 [2024-11-28 12:50:48.896483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.481 [2024-11-28 12:50:48.896490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.481 [2024-11-28 12:50:48.896496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.481 [2024-11-28 12:50:48.896510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.481 qpair failed and we were unable to recover it.
00:27:06.481 [2024-11-28 12:50:48.906410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.481 [2024-11-28 12:50:48.906468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.481 [2024-11-28 12:50:48.906482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.481 [2024-11-28 12:50:48.906488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.481 [2024-11-28 12:50:48.906494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.481 [2024-11-28 12:50:48.906509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.481 qpair failed and we were unable to recover it.
00:27:06.481 [2024-11-28 12:50:48.916455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.481 [2024-11-28 12:50:48.916536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.481 [2024-11-28 12:50:48.916550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.481 [2024-11-28 12:50:48.916557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.481 [2024-11-28 12:50:48.916563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.481 [2024-11-28 12:50:48.916578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.481 qpair failed and we were unable to recover it.
00:27:06.481 [2024-11-28 12:50:48.926484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.481 [2024-11-28 12:50:48.926542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.481 [2024-11-28 12:50:48.926556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.481 [2024-11-28 12:50:48.926563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.481 [2024-11-28 12:50:48.926569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.481 [2024-11-28 12:50:48.926588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.481 qpair failed and we were unable to recover it.
00:27:06.481 [2024-11-28 12:50:48.936445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.481 [2024-11-28 12:50:48.936505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.481 [2024-11-28 12:50:48.936518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.481 [2024-11-28 12:50:48.936525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.481 [2024-11-28 12:50:48.936531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.481 [2024-11-28 12:50:48.936545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.481 qpair failed and we were unable to recover it.
00:27:06.481 [2024-11-28 12:50:48.946544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.481 [2024-11-28 12:50:48.946598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.481 [2024-11-28 12:50:48.946612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.481 [2024-11-28 12:50:48.946620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.481 [2024-11-28 12:50:48.946626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.481 [2024-11-28 12:50:48.946641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.481 qpair failed and we were unable to recover it.
00:27:06.482 [2024-11-28 12:50:48.956504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.482 [2024-11-28 12:50:48.956561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.482 [2024-11-28 12:50:48.956575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.482 [2024-11-28 12:50:48.956582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.482 [2024-11-28 12:50:48.956587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.482 [2024-11-28 12:50:48.956602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.482 qpair failed and we were unable to recover it. 
00:27:06.482 [2024-11-28 12:50:48.966649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.482 [2024-11-28 12:50:48.966710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.482 [2024-11-28 12:50:48.966724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.482 [2024-11-28 12:50:48.966731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.482 [2024-11-28 12:50:48.966737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.482 [2024-11-28 12:50:48.966752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.482 qpair failed and we were unable to recover it. 
00:27:06.482 [2024-11-28 12:50:48.976630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.482 [2024-11-28 12:50:48.976688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.482 [2024-11-28 12:50:48.976702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.482 [2024-11-28 12:50:48.976709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.482 [2024-11-28 12:50:48.976715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.482 [2024-11-28 12:50:48.976730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.482 qpair failed and we were unable to recover it. 
00:27:06.482 [2024-11-28 12:50:48.986627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.482 [2024-11-28 12:50:48.986700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.482 [2024-11-28 12:50:48.986714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.482 [2024-11-28 12:50:48.986721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.482 [2024-11-28 12:50:48.986727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.482 [2024-11-28 12:50:48.986742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.482 qpair failed and we were unable to recover it. 
00:27:06.741 [2024-11-28 12:50:48.996693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.741 [2024-11-28 12:50:48.996749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.741 [2024-11-28 12:50:48.996764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.741 [2024-11-28 12:50:48.996771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.741 [2024-11-28 12:50:48.996776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.741 [2024-11-28 12:50:48.996791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.741 qpair failed and we were unable to recover it. 
00:27:06.741 [2024-11-28 12:50:49.006724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.741 [2024-11-28 12:50:49.006790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.741 [2024-11-28 12:50:49.006805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.741 [2024-11-28 12:50:49.006811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.741 [2024-11-28 12:50:49.006817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.741 [2024-11-28 12:50:49.006831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.741 qpair failed and we were unable to recover it. 
00:27:06.741 [2024-11-28 12:50:49.016696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.741 [2024-11-28 12:50:49.016751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.741 [2024-11-28 12:50:49.016765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.741 [2024-11-28 12:50:49.016775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.741 [2024-11-28 12:50:49.016781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.741 [2024-11-28 12:50:49.016796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.741 qpair failed and we were unable to recover it. 
00:27:06.741 [2024-11-28 12:50:49.026780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.741 [2024-11-28 12:50:49.026837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.741 [2024-11-28 12:50:49.026851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.741 [2024-11-28 12:50:49.026858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.741 [2024-11-28 12:50:49.026864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.741 [2024-11-28 12:50:49.026880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.741 qpair failed and we were unable to recover it. 
00:27:06.741 [2024-11-28 12:50:49.036809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.741 [2024-11-28 12:50:49.036865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.741 [2024-11-28 12:50:49.036879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.741 [2024-11-28 12:50:49.036886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.741 [2024-11-28 12:50:49.036891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.741 [2024-11-28 12:50:49.036906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.741 qpair failed and we were unable to recover it. 
00:27:06.741 [2024-11-28 12:50:49.046871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.742 [2024-11-28 12:50:49.046943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.742 [2024-11-28 12:50:49.046962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.742 [2024-11-28 12:50:49.046969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.742 [2024-11-28 12:50:49.046975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.742 [2024-11-28 12:50:49.046990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.742 qpair failed and we were unable to recover it. 
00:27:06.742 [2024-11-28 12:50:49.056913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.742 [2024-11-28 12:50:49.057022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.742 [2024-11-28 12:50:49.057036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.742 [2024-11-28 12:50:49.057043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.742 [2024-11-28 12:50:49.057049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.742 [2024-11-28 12:50:49.057068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.742 qpair failed and we were unable to recover it. 
00:27:06.742 [2024-11-28 12:50:49.066911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.742 [2024-11-28 12:50:49.067011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.742 [2024-11-28 12:50:49.067025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.742 [2024-11-28 12:50:49.067031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.742 [2024-11-28 12:50:49.067038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.742 [2024-11-28 12:50:49.067052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.742 qpair failed and we were unable to recover it. 
00:27:06.742 [2024-11-28 12:50:49.076926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.742 [2024-11-28 12:50:49.076985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.742 [2024-11-28 12:50:49.076999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.742 [2024-11-28 12:50:49.077006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.742 [2024-11-28 12:50:49.077012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.742 [2024-11-28 12:50:49.077026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.742 qpair failed and we were unable to recover it. 
00:27:06.742 [2024-11-28 12:50:49.086978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.742 [2024-11-28 12:50:49.087039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.742 [2024-11-28 12:50:49.087052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.742 [2024-11-28 12:50:49.087059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.742 [2024-11-28 12:50:49.087065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.742 [2024-11-28 12:50:49.087080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.742 qpair failed and we were unable to recover it. 
00:27:06.742 [2024-11-28 12:50:49.096996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.742 [2024-11-28 12:50:49.097066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.742 [2024-11-28 12:50:49.097080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.742 [2024-11-28 12:50:49.097087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.742 [2024-11-28 12:50:49.097093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.742 [2024-11-28 12:50:49.097107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.742 qpair failed and we were unable to recover it. 
00:27:06.742 [2024-11-28 12:50:49.107013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.742 [2024-11-28 12:50:49.107082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.742 [2024-11-28 12:50:49.107096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.742 [2024-11-28 12:50:49.107102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.742 [2024-11-28 12:50:49.107108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.742 [2024-11-28 12:50:49.107123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.742 qpair failed and we were unable to recover it. 
00:27:06.742 [2024-11-28 12:50:49.117024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.742 [2024-11-28 12:50:49.117079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.742 [2024-11-28 12:50:49.117094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.742 [2024-11-28 12:50:49.117101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.742 [2024-11-28 12:50:49.117107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.742 [2024-11-28 12:50:49.117122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.742 qpair failed and we were unable to recover it. 
00:27:06.742 [2024-11-28 12:50:49.127066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.742 [2024-11-28 12:50:49.127124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.742 [2024-11-28 12:50:49.127139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.742 [2024-11-28 12:50:49.127146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.742 [2024-11-28 12:50:49.127152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.742 [2024-11-28 12:50:49.127167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.742 qpair failed and we were unable to recover it. 
00:27:06.742 [2024-11-28 12:50:49.137092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.742 [2024-11-28 12:50:49.137149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.742 [2024-11-28 12:50:49.137163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.742 [2024-11-28 12:50:49.137170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.742 [2024-11-28 12:50:49.137176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.742 [2024-11-28 12:50:49.137191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.742 qpair failed and we were unable to recover it. 
00:27:06.742 [2024-11-28 12:50:49.147052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.742 [2024-11-28 12:50:49.147110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.742 [2024-11-28 12:50:49.147126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.742 [2024-11-28 12:50:49.147133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.742 [2024-11-28 12:50:49.147139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.742 [2024-11-28 12:50:49.147153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.742 qpair failed and we were unable to recover it. 
00:27:06.742 [2024-11-28 12:50:49.157154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.742 [2024-11-28 12:50:49.157204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.742 [2024-11-28 12:50:49.157217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.742 [2024-11-28 12:50:49.157224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.742 [2024-11-28 12:50:49.157230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.742 [2024-11-28 12:50:49.157244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.742 qpair failed and we were unable to recover it. 
00:27:06.742 [2024-11-28 12:50:49.167193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.742 [2024-11-28 12:50:49.167254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.742 [2024-11-28 12:50:49.167268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.742 [2024-11-28 12:50:49.167275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.742 [2024-11-28 12:50:49.167281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.742 [2024-11-28 12:50:49.167296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.742 qpair failed and we were unable to recover it. 
00:27:06.743 [2024-11-28 12:50:49.177215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.743 [2024-11-28 12:50:49.177270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.743 [2024-11-28 12:50:49.177285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.743 [2024-11-28 12:50:49.177292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.743 [2024-11-28 12:50:49.177298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.743 [2024-11-28 12:50:49.177313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.743 qpair failed and we were unable to recover it. 
00:27:06.743 [2024-11-28 12:50:49.187252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.743 [2024-11-28 12:50:49.187306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.743 [2024-11-28 12:50:49.187320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.743 [2024-11-28 12:50:49.187327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.743 [2024-11-28 12:50:49.187336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.743 [2024-11-28 12:50:49.187351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.743 qpair failed and we were unable to recover it. 
00:27:06.743 [2024-11-28 12:50:49.197252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.743 [2024-11-28 12:50:49.197307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.743 [2024-11-28 12:50:49.197320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.743 [2024-11-28 12:50:49.197327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.743 [2024-11-28 12:50:49.197333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.743 [2024-11-28 12:50:49.197348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.743 qpair failed and we were unable to recover it. 
00:27:06.743 [2024-11-28 12:50:49.207326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.743 [2024-11-28 12:50:49.207389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.743 [2024-11-28 12:50:49.207403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.743 [2024-11-28 12:50:49.207409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.743 [2024-11-28 12:50:49.207415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.743 [2024-11-28 12:50:49.207430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.743 qpair failed and we were unable to recover it. 
00:27:06.743 [2024-11-28 12:50:49.217305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:06.743 [2024-11-28 12:50:49.217362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:06.743 [2024-11-28 12:50:49.217376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:06.743 [2024-11-28 12:50:49.217383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:06.743 [2024-11-28 12:50:49.217388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:06.743 [2024-11-28 12:50:49.217404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:06.743 qpair failed and we were unable to recover it. 
00:27:06.743 [2024-11-28 12:50:49.227339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.743 [2024-11-28 12:50:49.227393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.743 [2024-11-28 12:50:49.227406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.743 [2024-11-28 12:50:49.227414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.743 [2024-11-28 12:50:49.227420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.743 [2024-11-28 12:50:49.227435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.743 qpair failed and we were unable to recover it.
00:27:06.743 [2024-11-28 12:50:49.237368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.743 [2024-11-28 12:50:49.237420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.743 [2024-11-28 12:50:49.237434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.743 [2024-11-28 12:50:49.237441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.743 [2024-11-28 12:50:49.237446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.743 [2024-11-28 12:50:49.237460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.743 qpair failed and we were unable to recover it.
00:27:06.743 [2024-11-28 12:50:49.247402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:06.743 [2024-11-28 12:50:49.247462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:06.743 [2024-11-28 12:50:49.247476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:06.743 [2024-11-28 12:50:49.247483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:06.743 [2024-11-28 12:50:49.247489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:06.743 [2024-11-28 12:50:49.247504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.743 qpair failed and we were unable to recover it.
00:27:07.003 [2024-11-28 12:50:49.257423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.003 [2024-11-28 12:50:49.257478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.003 [2024-11-28 12:50:49.257492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.003 [2024-11-28 12:50:49.257498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.003 [2024-11-28 12:50:49.257504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.003 [2024-11-28 12:50:49.257519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.003 qpair failed and we were unable to recover it.
00:27:07.003 [2024-11-28 12:50:49.267459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.003 [2024-11-28 12:50:49.267545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.003 [2024-11-28 12:50:49.267559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.003 [2024-11-28 12:50:49.267566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.003 [2024-11-28 12:50:49.267572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.003 [2024-11-28 12:50:49.267587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.003 qpair failed and we were unable to recover it.
00:27:07.003 [2024-11-28 12:50:49.277483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.003 [2024-11-28 12:50:49.277543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.003 [2024-11-28 12:50:49.277560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.003 [2024-11-28 12:50:49.277567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.003 [2024-11-28 12:50:49.277573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.003 [2024-11-28 12:50:49.277587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.003 qpair failed and we were unable to recover it.
00:27:07.003 [2024-11-28 12:50:49.287512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.003 [2024-11-28 12:50:49.287572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.003 [2024-11-28 12:50:49.287586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.003 [2024-11-28 12:50:49.287593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.003 [2024-11-28 12:50:49.287599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.003 [2024-11-28 12:50:49.287614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.003 qpair failed and we were unable to recover it.
00:27:07.003 [2024-11-28 12:50:49.297678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.003 [2024-11-28 12:50:49.297741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.003 [2024-11-28 12:50:49.297755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.003 [2024-11-28 12:50:49.297762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.003 [2024-11-28 12:50:49.297767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.003 [2024-11-28 12:50:49.297781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.003 qpair failed and we were unable to recover it.
00:27:07.003 [2024-11-28 12:50:49.307617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.003 [2024-11-28 12:50:49.307719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.003 [2024-11-28 12:50:49.307733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.003 [2024-11-28 12:50:49.307739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.003 [2024-11-28 12:50:49.307746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.003 [2024-11-28 12:50:49.307760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.003 qpair failed and we were unable to recover it.
00:27:07.003 [2024-11-28 12:50:49.317653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.003 [2024-11-28 12:50:49.317747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.003 [2024-11-28 12:50:49.317761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.003 [2024-11-28 12:50:49.317768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.003 [2024-11-28 12:50:49.317779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.003 [2024-11-28 12:50:49.317794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.003 qpair failed and we were unable to recover it.
00:27:07.003 [2024-11-28 12:50:49.327652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.003 [2024-11-28 12:50:49.327709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.003 [2024-11-28 12:50:49.327724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.003 [2024-11-28 12:50:49.327730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.003 [2024-11-28 12:50:49.327736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.003 [2024-11-28 12:50:49.327751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.003 qpair failed and we were unable to recover it.
00:27:07.003 [2024-11-28 12:50:49.337659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.003 [2024-11-28 12:50:49.337923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.003 [2024-11-28 12:50:49.337939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.003 [2024-11-28 12:50:49.337946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.003 [2024-11-28 12:50:49.337955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.003 [2024-11-28 12:50:49.337971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.003 qpair failed and we were unable to recover it.
00:27:07.003 [2024-11-28 12:50:49.347673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.003 [2024-11-28 12:50:49.347731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.003 [2024-11-28 12:50:49.347745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.003 [2024-11-28 12:50:49.347751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.003 [2024-11-28 12:50:49.347757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.003 [2024-11-28 12:50:49.347772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.003 qpair failed and we were unable to recover it.
00:27:07.004 [2024-11-28 12:50:49.357707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.004 [2024-11-28 12:50:49.357766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.004 [2024-11-28 12:50:49.357779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.004 [2024-11-28 12:50:49.357786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.004 [2024-11-28 12:50:49.357792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.004 [2024-11-28 12:50:49.357807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-11-28 12:50:49.367751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.004 [2024-11-28 12:50:49.367809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.004 [2024-11-28 12:50:49.367823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.004 [2024-11-28 12:50:49.367830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.004 [2024-11-28 12:50:49.367836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.004 [2024-11-28 12:50:49.367851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-11-28 12:50:49.377838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.004 [2024-11-28 12:50:49.377914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.004 [2024-11-28 12:50:49.377927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.004 [2024-11-28 12:50:49.377935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.004 [2024-11-28 12:50:49.377941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.004 [2024-11-28 12:50:49.377960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-11-28 12:50:49.387798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.004 [2024-11-28 12:50:49.387853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.004 [2024-11-28 12:50:49.387867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.004 [2024-11-28 12:50:49.387873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.004 [2024-11-28 12:50:49.387879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.004 [2024-11-28 12:50:49.387894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-11-28 12:50:49.397835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.004 [2024-11-28 12:50:49.397891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.004 [2024-11-28 12:50:49.397906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.004 [2024-11-28 12:50:49.397912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.004 [2024-11-28 12:50:49.397918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.004 [2024-11-28 12:50:49.397934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-11-28 12:50:49.407903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.004 [2024-11-28 12:50:49.408007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.004 [2024-11-28 12:50:49.408025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.004 [2024-11-28 12:50:49.408032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.004 [2024-11-28 12:50:49.408038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.004 [2024-11-28 12:50:49.408053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-11-28 12:50:49.417900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.004 [2024-11-28 12:50:49.417962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.004 [2024-11-28 12:50:49.417976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.004 [2024-11-28 12:50:49.417983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.004 [2024-11-28 12:50:49.417989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.004 [2024-11-28 12:50:49.418004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-11-28 12:50:49.427915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.004 [2024-11-28 12:50:49.427974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.004 [2024-11-28 12:50:49.427989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.004 [2024-11-28 12:50:49.427996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.004 [2024-11-28 12:50:49.428002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.004 [2024-11-28 12:50:49.428017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-11-28 12:50:49.437936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.004 [2024-11-28 12:50:49.437998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.004 [2024-11-28 12:50:49.438012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.004 [2024-11-28 12:50:49.438019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.004 [2024-11-28 12:50:49.438025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.004 [2024-11-28 12:50:49.438041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-11-28 12:50:49.448034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.004 [2024-11-28 12:50:49.448122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.004 [2024-11-28 12:50:49.448135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.004 [2024-11-28 12:50:49.448145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.004 [2024-11-28 12:50:49.448151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.004 [2024-11-28 12:50:49.448165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-11-28 12:50:49.458010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.004 [2024-11-28 12:50:49.458071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.004 [2024-11-28 12:50:49.458084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.004 [2024-11-28 12:50:49.458091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.004 [2024-11-28 12:50:49.458097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.004 [2024-11-28 12:50:49.458111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-11-28 12:50:49.468031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.004 [2024-11-28 12:50:49.468084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.004 [2024-11-28 12:50:49.468098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.004 [2024-11-28 12:50:49.468105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.004 [2024-11-28 12:50:49.468111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.004 [2024-11-28 12:50:49.468126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.004 qpair failed and we were unable to recover it.
00:27:07.004 [2024-11-28 12:50:49.478126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.004 [2024-11-28 12:50:49.478190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.004 [2024-11-28 12:50:49.478203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.004 [2024-11-28 12:50:49.478210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.004 [2024-11-28 12:50:49.478216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.004 [2024-11-28 12:50:49.478231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-11-28 12:50:49.488110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.005 [2024-11-28 12:50:49.488187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.005 [2024-11-28 12:50:49.488222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.005 [2024-11-28 12:50:49.488230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.005 [2024-11-28 12:50:49.488236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.005 [2024-11-28 12:50:49.488264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-11-28 12:50:49.498145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.005 [2024-11-28 12:50:49.498207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.005 [2024-11-28 12:50:49.498222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.005 [2024-11-28 12:50:49.498229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.005 [2024-11-28 12:50:49.498235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.005 [2024-11-28 12:50:49.498250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.005 [2024-11-28 12:50:49.508127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.005 [2024-11-28 12:50:49.508186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.005 [2024-11-28 12:50:49.508200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.005 [2024-11-28 12:50:49.508207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.005 [2024-11-28 12:50:49.508213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.005 [2024-11-28 12:50:49.508228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.005 qpair failed and we were unable to recover it.
00:27:07.264 [2024-11-28 12:50:49.518194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.264 [2024-11-28 12:50:49.518250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.264 [2024-11-28 12:50:49.518264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.265 [2024-11-28 12:50:49.518271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.265 [2024-11-28 12:50:49.518277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.265 [2024-11-28 12:50:49.518292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.265 qpair failed and we were unable to recover it.
00:27:07.265 [2024-11-28 12:50:49.528209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.265 [2024-11-28 12:50:49.528267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.265 [2024-11-28 12:50:49.528281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.265 [2024-11-28 12:50:49.528288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.265 [2024-11-28 12:50:49.528294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.265 [2024-11-28 12:50:49.528309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.265 qpair failed and we were unable to recover it.
00:27:07.265 [2024-11-28 12:50:49.538188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.265 [2024-11-28 12:50:49.538254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.265 [2024-11-28 12:50:49.538274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.265 [2024-11-28 12:50:49.538281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.265 [2024-11-28 12:50:49.538287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.265 [2024-11-28 12:50:49.538302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.265 qpair failed and we were unable to recover it.
00:27:07.265 [2024-11-28 12:50:49.548281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.265 [2024-11-28 12:50:49.548338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.265 [2024-11-28 12:50:49.548352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.265 [2024-11-28 12:50:49.548358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.265 [2024-11-28 12:50:49.548364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.265 [2024-11-28 12:50:49.548379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.265 qpair failed and we were unable to recover it.
00:27:07.265 [2024-11-28 12:50:49.558274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.265 [2024-11-28 12:50:49.558363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.265 [2024-11-28 12:50:49.558377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.265 [2024-11-28 12:50:49.558383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.265 [2024-11-28 12:50:49.558389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.265 [2024-11-28 12:50:49.558405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.265 qpair failed and we were unable to recover it.
00:27:07.265 [2024-11-28 12:50:49.568324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:07.265 [2024-11-28 12:50:49.568381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:07.265 [2024-11-28 12:50:49.568394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:07.265 [2024-11-28 12:50:49.568401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:07.265 [2024-11-28 12:50:49.568407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:07.265 [2024-11-28 12:50:49.568422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.265 qpair failed and we were unable to recover it.
00:27:07.265 [2024-11-28 12:50:49.578373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.265 [2024-11-28 12:50:49.578480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.265 [2024-11-28 12:50:49.578494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.265 [2024-11-28 12:50:49.578507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.265 [2024-11-28 12:50:49.578513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.265 [2024-11-28 12:50:49.578529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.265 qpair failed and we were unable to recover it. 
00:27:07.265 [2024-11-28 12:50:49.588393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.265 [2024-11-28 12:50:49.588450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.265 [2024-11-28 12:50:49.588464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.265 [2024-11-28 12:50:49.588471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.265 [2024-11-28 12:50:49.588477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.265 [2024-11-28 12:50:49.588492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.265 qpair failed and we were unable to recover it. 
00:27:07.265 [2024-11-28 12:50:49.598408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.265 [2024-11-28 12:50:49.598466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.265 [2024-11-28 12:50:49.598480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.265 [2024-11-28 12:50:49.598487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.265 [2024-11-28 12:50:49.598493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.265 [2024-11-28 12:50:49.598507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.265 qpair failed and we were unable to recover it. 
00:27:07.265 [2024-11-28 12:50:49.608435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.265 [2024-11-28 12:50:49.608492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.265 [2024-11-28 12:50:49.608506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.265 [2024-11-28 12:50:49.608513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.265 [2024-11-28 12:50:49.608520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.265 [2024-11-28 12:50:49.608534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.265 qpair failed and we were unable to recover it. 
00:27:07.265 [2024-11-28 12:50:49.618443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.265 [2024-11-28 12:50:49.618524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.265 [2024-11-28 12:50:49.618538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.265 [2024-11-28 12:50:49.618545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.265 [2024-11-28 12:50:49.618550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.265 [2024-11-28 12:50:49.618568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.265 qpair failed and we were unable to recover it. 
00:27:07.265 [2024-11-28 12:50:49.628488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.265 [2024-11-28 12:50:49.628539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.265 [2024-11-28 12:50:49.628552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.265 [2024-11-28 12:50:49.628559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.265 [2024-11-28 12:50:49.628565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.265 [2024-11-28 12:50:49.628579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.265 qpair failed and we were unable to recover it. 
00:27:07.265 [2024-11-28 12:50:49.638529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.265 [2024-11-28 12:50:49.638594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.265 [2024-11-28 12:50:49.638608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.265 [2024-11-28 12:50:49.638614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.265 [2024-11-28 12:50:49.638621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.265 [2024-11-28 12:50:49.638635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.265 qpair failed and we were unable to recover it. 
00:27:07.265 [2024-11-28 12:50:49.648542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.266 [2024-11-28 12:50:49.648618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.266 [2024-11-28 12:50:49.648632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.266 [2024-11-28 12:50:49.648639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.266 [2024-11-28 12:50:49.648645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.266 [2024-11-28 12:50:49.648660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.266 qpair failed and we were unable to recover it. 
00:27:07.266 [2024-11-28 12:50:49.658576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.266 [2024-11-28 12:50:49.658633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.266 [2024-11-28 12:50:49.658646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.266 [2024-11-28 12:50:49.658653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.266 [2024-11-28 12:50:49.658659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.266 [2024-11-28 12:50:49.658673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.266 qpair failed and we were unable to recover it. 
00:27:07.266 [2024-11-28 12:50:49.668587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.266 [2024-11-28 12:50:49.668640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.266 [2024-11-28 12:50:49.668654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.266 [2024-11-28 12:50:49.668661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.266 [2024-11-28 12:50:49.668667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.266 [2024-11-28 12:50:49.668682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.266 qpair failed and we were unable to recover it. 
00:27:07.266 [2024-11-28 12:50:49.678639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.266 [2024-11-28 12:50:49.678721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.266 [2024-11-28 12:50:49.678734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.266 [2024-11-28 12:50:49.678741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.266 [2024-11-28 12:50:49.678746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.266 [2024-11-28 12:50:49.678761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.266 qpair failed and we were unable to recover it. 
00:27:07.266 [2024-11-28 12:50:49.688679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.266 [2024-11-28 12:50:49.688750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.266 [2024-11-28 12:50:49.688764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.266 [2024-11-28 12:50:49.688771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.266 [2024-11-28 12:50:49.688776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.266 [2024-11-28 12:50:49.688792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.266 qpair failed and we were unable to recover it. 
00:27:07.266 [2024-11-28 12:50:49.698686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.266 [2024-11-28 12:50:49.698744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.266 [2024-11-28 12:50:49.698758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.266 [2024-11-28 12:50:49.698765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.266 [2024-11-28 12:50:49.698771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.266 [2024-11-28 12:50:49.698786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.266 qpair failed and we were unable to recover it. 
00:27:07.266 [2024-11-28 12:50:49.708705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.266 [2024-11-28 12:50:49.708785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.266 [2024-11-28 12:50:49.708803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.266 [2024-11-28 12:50:49.708809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.266 [2024-11-28 12:50:49.708815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.266 [2024-11-28 12:50:49.708831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.266 qpair failed and we were unable to recover it. 
00:27:07.266 [2024-11-28 12:50:49.718723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.266 [2024-11-28 12:50:49.718783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.266 [2024-11-28 12:50:49.718797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.266 [2024-11-28 12:50:49.718803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.266 [2024-11-28 12:50:49.718809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.266 [2024-11-28 12:50:49.718824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.266 qpair failed and we were unable to recover it. 
00:27:07.266 [2024-11-28 12:50:49.728769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.266 [2024-11-28 12:50:49.728826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.266 [2024-11-28 12:50:49.728840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.266 [2024-11-28 12:50:49.728846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.266 [2024-11-28 12:50:49.728852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.266 [2024-11-28 12:50:49.728867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.266 qpair failed and we were unable to recover it. 
00:27:07.266 [2024-11-28 12:50:49.738813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.266 [2024-11-28 12:50:49.738871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.266 [2024-11-28 12:50:49.738884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.266 [2024-11-28 12:50:49.738891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.266 [2024-11-28 12:50:49.738897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.266 [2024-11-28 12:50:49.738911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.266 qpair failed and we were unable to recover it. 
00:27:07.266 [2024-11-28 12:50:49.748812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.266 [2024-11-28 12:50:49.748866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.266 [2024-11-28 12:50:49.748879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.266 [2024-11-28 12:50:49.748886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.266 [2024-11-28 12:50:49.748895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.266 [2024-11-28 12:50:49.748910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.266 qpair failed and we were unable to recover it. 
00:27:07.266 [2024-11-28 12:50:49.758839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.266 [2024-11-28 12:50:49.758897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.266 [2024-11-28 12:50:49.758911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.266 [2024-11-28 12:50:49.758918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.266 [2024-11-28 12:50:49.758924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.266 [2024-11-28 12:50:49.758939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.266 qpair failed and we were unable to recover it. 
00:27:07.266 [2024-11-28 12:50:49.768876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.266 [2024-11-28 12:50:49.768934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.266 [2024-11-28 12:50:49.768952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.266 [2024-11-28 12:50:49.768959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.266 [2024-11-28 12:50:49.768965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.266 [2024-11-28 12:50:49.768981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.266 qpair failed and we were unable to recover it. 
00:27:07.266 [2024-11-28 12:50:49.778910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.267 [2024-11-28 12:50:49.778974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.267 [2024-11-28 12:50:49.778988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.267 [2024-11-28 12:50:49.778995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.267 [2024-11-28 12:50:49.779000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.267 [2024-11-28 12:50:49.779016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.267 qpair failed and we were unable to recover it. 
00:27:07.526 [2024-11-28 12:50:49.788865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.526 [2024-11-28 12:50:49.788923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.526 [2024-11-28 12:50:49.788937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.526 [2024-11-28 12:50:49.788944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.526 [2024-11-28 12:50:49.788954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.526 [2024-11-28 12:50:49.788969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.526 qpair failed and we were unable to recover it. 
00:27:07.526 [2024-11-28 12:50:49.798968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.526 [2024-11-28 12:50:49.799024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.526 [2024-11-28 12:50:49.799038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.526 [2024-11-28 12:50:49.799045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.526 [2024-11-28 12:50:49.799050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.526 [2024-11-28 12:50:49.799066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.526 qpair failed and we were unable to recover it. 
00:27:07.526 [2024-11-28 12:50:49.809005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.526 [2024-11-28 12:50:49.809065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.526 [2024-11-28 12:50:49.809079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.526 [2024-11-28 12:50:49.809086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.526 [2024-11-28 12:50:49.809092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.526 [2024-11-28 12:50:49.809107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.526 qpair failed and we were unable to recover it. 
00:27:07.526 [2024-11-28 12:50:49.819038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.526 [2024-11-28 12:50:49.819117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.526 [2024-11-28 12:50:49.819131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.526 [2024-11-28 12:50:49.819137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.526 [2024-11-28 12:50:49.819143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.527 [2024-11-28 12:50:49.819158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.527 qpair failed and we were unable to recover it. 
00:27:07.527 [2024-11-28 12:50:49.829035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.527 [2024-11-28 12:50:49.829093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.527 [2024-11-28 12:50:49.829107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.527 [2024-11-28 12:50:49.829114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.527 [2024-11-28 12:50:49.829120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.527 [2024-11-28 12:50:49.829135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.527 qpair failed and we were unable to recover it. 
00:27:07.527 [2024-11-28 12:50:49.839043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.527 [2024-11-28 12:50:49.839100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.527 [2024-11-28 12:50:49.839119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.527 [2024-11-28 12:50:49.839126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.527 [2024-11-28 12:50:49.839133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.527 [2024-11-28 12:50:49.839149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.527 qpair failed and we were unable to recover it. 
00:27:07.527 [2024-11-28 12:50:49.849115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.527 [2024-11-28 12:50:49.849172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.527 [2024-11-28 12:50:49.849187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.527 [2024-11-28 12:50:49.849194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.527 [2024-11-28 12:50:49.849201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.527 [2024-11-28 12:50:49.849215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.527 qpair failed and we were unable to recover it. 
[The same seven-record failure sequence (ctrlr.c:764 "Unknown controller ID 0x1" -> nvme_fabric.c:599 "Connect command failed, rc -5" -> nvme_fabric.c:610 "sct 1, sc 130" -> nvme_tcp.c:2348 "Failed to poll NVMe-oF Fabric CONNECT command" -> nvme_tcp.c:2125 "Failed to connect tqpair=0x7f8c64000b90" -> nvme_qpair.c:812 "CQ transport error -6 (No such device or address) on qpair id 1" -> "qpair failed and we were unable to recover it.") repeats 33 more times, roughly every 10 ms, from 2024-11-28 12:50:49.859 through 12:50:50.180; repeats elided here.]
00:27:07.790 [2024-11-28 12:50:50.190084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.790 [2024-11-28 12:50:50.190141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.790 [2024-11-28 12:50:50.190155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.790 [2024-11-28 12:50:50.190161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.790 [2024-11-28 12:50:50.190168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.790 [2024-11-28 12:50:50.190183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.790 qpair failed and we were unable to recover it. 
00:27:07.790 [2024-11-28 12:50:50.200018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.790 [2024-11-28 12:50:50.200074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.790 [2024-11-28 12:50:50.200087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.790 [2024-11-28 12:50:50.200094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.790 [2024-11-28 12:50:50.200100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.790 [2024-11-28 12:50:50.200114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.790 qpair failed and we were unable to recover it. 
00:27:07.790 [2024-11-28 12:50:50.210116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.790 [2024-11-28 12:50:50.210209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.790 [2024-11-28 12:50:50.210223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.790 [2024-11-28 12:50:50.210230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.790 [2024-11-28 12:50:50.210236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.790 [2024-11-28 12:50:50.210251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.790 qpair failed and we were unable to recover it. 
00:27:07.790 [2024-11-28 12:50:50.220148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.790 [2024-11-28 12:50:50.220228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.790 [2024-11-28 12:50:50.220242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.790 [2024-11-28 12:50:50.220249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.790 [2024-11-28 12:50:50.220255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.790 [2024-11-28 12:50:50.220269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.790 qpair failed and we were unable to recover it. 
00:27:07.790 [2024-11-28 12:50:50.230105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.790 [2024-11-28 12:50:50.230162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.790 [2024-11-28 12:50:50.230176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.790 [2024-11-28 12:50:50.230183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.790 [2024-11-28 12:50:50.230189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.790 [2024-11-28 12:50:50.230203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.790 qpair failed and we were unable to recover it. 
00:27:07.790 [2024-11-28 12:50:50.240201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.790 [2024-11-28 12:50:50.240254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.790 [2024-11-28 12:50:50.240268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.790 [2024-11-28 12:50:50.240275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.790 [2024-11-28 12:50:50.240281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.790 [2024-11-28 12:50:50.240295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.790 qpair failed and we were unable to recover it. 
00:27:07.790 [2024-11-28 12:50:50.250242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.790 [2024-11-28 12:50:50.250323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.790 [2024-11-28 12:50:50.250337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.790 [2024-11-28 12:50:50.250344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.790 [2024-11-28 12:50:50.250350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.790 [2024-11-28 12:50:50.250365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.790 qpair failed and we were unable to recover it. 
00:27:07.790 [2024-11-28 12:50:50.260268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.790 [2024-11-28 12:50:50.260324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.790 [2024-11-28 12:50:50.260337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.790 [2024-11-28 12:50:50.260344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.790 [2024-11-28 12:50:50.260350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.790 [2024-11-28 12:50:50.260364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.790 qpair failed and we were unable to recover it. 
00:27:07.790 [2024-11-28 12:50:50.270278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.790 [2024-11-28 12:50:50.270346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.790 [2024-11-28 12:50:50.270363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.790 [2024-11-28 12:50:50.270370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.790 [2024-11-28 12:50:50.270376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.790 [2024-11-28 12:50:50.270391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.790 qpair failed and we were unable to recover it. 
00:27:07.790 [2024-11-28 12:50:50.280318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.790 [2024-11-28 12:50:50.280404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.790 [2024-11-28 12:50:50.280418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.790 [2024-11-28 12:50:50.280425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.791 [2024-11-28 12:50:50.280431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.791 [2024-11-28 12:50:50.280446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.791 qpair failed and we were unable to recover it. 
00:27:07.791 [2024-11-28 12:50:50.290310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.791 [2024-11-28 12:50:50.290369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.791 [2024-11-28 12:50:50.290383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.791 [2024-11-28 12:50:50.290390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.791 [2024-11-28 12:50:50.290396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.791 [2024-11-28 12:50:50.290411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.791 qpair failed and we were unable to recover it. 
00:27:07.791 [2024-11-28 12:50:50.300373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:07.791 [2024-11-28 12:50:50.300434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:07.791 [2024-11-28 12:50:50.300448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:07.791 [2024-11-28 12:50:50.300455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:07.791 [2024-11-28 12:50:50.300460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:07.791 [2024-11-28 12:50:50.300475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.791 qpair failed and we were unable to recover it. 
00:27:08.050 [2024-11-28 12:50:50.310340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.050 [2024-11-28 12:50:50.310398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.050 [2024-11-28 12:50:50.310411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.050 [2024-11-28 12:50:50.310418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.050 [2024-11-28 12:50:50.310429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.050 [2024-11-28 12:50:50.310444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.050 qpair failed and we were unable to recover it. 
00:27:08.050 [2024-11-28 12:50:50.320417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.050 [2024-11-28 12:50:50.320476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.050 [2024-11-28 12:50:50.320489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.050 [2024-11-28 12:50:50.320495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.050 [2024-11-28 12:50:50.320501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.050 [2024-11-28 12:50:50.320515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.050 qpair failed and we were unable to recover it. 
00:27:08.050 [2024-11-28 12:50:50.330453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.050 [2024-11-28 12:50:50.330512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.050 [2024-11-28 12:50:50.330526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.050 [2024-11-28 12:50:50.330533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.050 [2024-11-28 12:50:50.330539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.050 [2024-11-28 12:50:50.330554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.050 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-11-28 12:50:50.340479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.051 [2024-11-28 12:50:50.340536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.051 [2024-11-28 12:50:50.340550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.051 [2024-11-28 12:50:50.340557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.051 [2024-11-28 12:50:50.340563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.051 [2024-11-28 12:50:50.340577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.051 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-11-28 12:50:50.350512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.051 [2024-11-28 12:50:50.350570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.051 [2024-11-28 12:50:50.350584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.051 [2024-11-28 12:50:50.350591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.051 [2024-11-28 12:50:50.350597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.051 [2024-11-28 12:50:50.350611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.051 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-11-28 12:50:50.360556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.051 [2024-11-28 12:50:50.360613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.051 [2024-11-28 12:50:50.360626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.051 [2024-11-28 12:50:50.360633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.051 [2024-11-28 12:50:50.360639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.051 [2024-11-28 12:50:50.360653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.051 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-11-28 12:50:50.370544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.051 [2024-11-28 12:50:50.370645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.051 [2024-11-28 12:50:50.370659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.051 [2024-11-28 12:50:50.370665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.051 [2024-11-28 12:50:50.370672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.051 [2024-11-28 12:50:50.370687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.051 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-11-28 12:50:50.380599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.051 [2024-11-28 12:50:50.380655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.051 [2024-11-28 12:50:50.380668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.051 [2024-11-28 12:50:50.380675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.051 [2024-11-28 12:50:50.380681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.051 [2024-11-28 12:50:50.380695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.051 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-11-28 12:50:50.390623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.051 [2024-11-28 12:50:50.390678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.051 [2024-11-28 12:50:50.390692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.051 [2024-11-28 12:50:50.390699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.051 [2024-11-28 12:50:50.390705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.051 [2024-11-28 12:50:50.390720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.051 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-11-28 12:50:50.400653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.051 [2024-11-28 12:50:50.400723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.051 [2024-11-28 12:50:50.400740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.051 [2024-11-28 12:50:50.400747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.051 [2024-11-28 12:50:50.400753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.051 [2024-11-28 12:50:50.400768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.051 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-11-28 12:50:50.410678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.051 [2024-11-28 12:50:50.410733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.051 [2024-11-28 12:50:50.410747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.051 [2024-11-28 12:50:50.410754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.051 [2024-11-28 12:50:50.410760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.051 [2024-11-28 12:50:50.410775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.051 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-11-28 12:50:50.420694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.051 [2024-11-28 12:50:50.420749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.051 [2024-11-28 12:50:50.420763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.051 [2024-11-28 12:50:50.420770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.051 [2024-11-28 12:50:50.420776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.051 [2024-11-28 12:50:50.420791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.051 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-11-28 12:50:50.430738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.051 [2024-11-28 12:50:50.430804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.051 [2024-11-28 12:50:50.430818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.051 [2024-11-28 12:50:50.430825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.051 [2024-11-28 12:50:50.430831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.051 [2024-11-28 12:50:50.430846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.051 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-11-28 12:50:50.440754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.051 [2024-11-28 12:50:50.440812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.051 [2024-11-28 12:50:50.440826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.051 [2024-11-28 12:50:50.440833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.051 [2024-11-28 12:50:50.440842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.051 [2024-11-28 12:50:50.440857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.051 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-11-28 12:50:50.450788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.051 [2024-11-28 12:50:50.450847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.051 [2024-11-28 12:50:50.450861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.051 [2024-11-28 12:50:50.450868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.051 [2024-11-28 12:50:50.450874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.051 [2024-11-28 12:50:50.450888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.051 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-11-28 12:50:50.460810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.051 [2024-11-28 12:50:50.460869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.051 [2024-11-28 12:50:50.460884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.051 [2024-11-28 12:50:50.460890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.051 [2024-11-28 12:50:50.460897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.051 [2024-11-28 12:50:50.460911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.051 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-11-28 12:50:50.470832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.052 [2024-11-28 12:50:50.470892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.052 [2024-11-28 12:50:50.470906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.052 [2024-11-28 12:50:50.470912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.052 [2024-11-28 12:50:50.470918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.052 [2024-11-28 12:50:50.470933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.052 qpair failed and we were unable to recover it. 
00:27:08.052 [2024-11-28 12:50:50.480877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.052 [2024-11-28 12:50:50.480932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.052 [2024-11-28 12:50:50.480945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.052 [2024-11-28 12:50:50.480956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.052 [2024-11-28 12:50:50.480962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.052 [2024-11-28 12:50:50.480977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.052 qpair failed and we were unable to recover it. 
00:27:08.052 [2024-11-28 12:50:50.490909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.052 [2024-11-28 12:50:50.491018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.052 [2024-11-28 12:50:50.491032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.052 [2024-11-28 12:50:50.491039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.052 [2024-11-28 12:50:50.491045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.052 [2024-11-28 12:50:50.491061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.052 qpair failed and we were unable to recover it. 
00:27:08.052 [2024-11-28 12:50:50.500975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.052 [2024-11-28 12:50:50.501079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.052 [2024-11-28 12:50:50.501093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.052 [2024-11-28 12:50:50.501100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.052 [2024-11-28 12:50:50.501106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.052 [2024-11-28 12:50:50.501121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.052 qpair failed and we were unable to recover it. 
00:27:08.052 [2024-11-28 12:50:50.510940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.052 [2024-11-28 12:50:50.510999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.052 [2024-11-28 12:50:50.511013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.052 [2024-11-28 12:50:50.511020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.052 [2024-11-28 12:50:50.511026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.052 [2024-11-28 12:50:50.511041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.052 qpair failed and we were unable to recover it. 
00:27:08.052 [2024-11-28 12:50:50.520981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.052 [2024-11-28 12:50:50.521035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.052 [2024-11-28 12:50:50.521049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.052 [2024-11-28 12:50:50.521056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.052 [2024-11-28 12:50:50.521062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.052 [2024-11-28 12:50:50.521077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.052 qpair failed and we were unable to recover it. 
00:27:08.052 [2024-11-28 12:50:50.531019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.052 [2024-11-28 12:50:50.531080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.052 [2024-11-28 12:50:50.531094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.052 [2024-11-28 12:50:50.531101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.052 [2024-11-28 12:50:50.531107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.052 [2024-11-28 12:50:50.531122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.052 qpair failed and we were unable to recover it. 
00:27:08.052 [2024-11-28 12:50:50.541036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.052 [2024-11-28 12:50:50.541089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.052 [2024-11-28 12:50:50.541104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.052 [2024-11-28 12:50:50.541110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.052 [2024-11-28 12:50:50.541116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.052 [2024-11-28 12:50:50.541131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.052 qpair failed and we were unable to recover it. 
00:27:08.052 [2024-11-28 12:50:50.551089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.052 [2024-11-28 12:50:50.551195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.052 [2024-11-28 12:50:50.551209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.052 [2024-11-28 12:50:50.551215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.052 [2024-11-28 12:50:50.551222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.052 [2024-11-28 12:50:50.551237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.052 qpair failed and we were unable to recover it. 
00:27:08.052 [2024-11-28 12:50:50.561095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.052 [2024-11-28 12:50:50.561153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.052 [2024-11-28 12:50:50.561167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.052 [2024-11-28 12:50:50.561174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.052 [2024-11-28 12:50:50.561180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.052 [2024-11-28 12:50:50.561195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.052 qpair failed and we were unable to recover it. 
00:27:08.311 [2024-11-28 12:50:50.571140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.311 [2024-11-28 12:50:50.571195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.311 [2024-11-28 12:50:50.571209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.311 [2024-11-28 12:50:50.571220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.311 [2024-11-28 12:50:50.571226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.311 [2024-11-28 12:50:50.571241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.311 qpair failed and we were unable to recover it. 
00:27:08.311 [2024-11-28 12:50:50.581175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.311 [2024-11-28 12:50:50.581276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.311 [2024-11-28 12:50:50.581290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.311 [2024-11-28 12:50:50.581296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.311 [2024-11-28 12:50:50.581302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.311 [2024-11-28 12:50:50.581318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.311 qpair failed and we were unable to recover it. 
00:27:08.311 [2024-11-28 12:50:50.591134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.311 [2024-11-28 12:50:50.591192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.311 [2024-11-28 12:50:50.591205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.311 [2024-11-28 12:50:50.591212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.311 [2024-11-28 12:50:50.591218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.311 [2024-11-28 12:50:50.591233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.311 qpair failed and we were unable to recover it. 
00:27:08.311 [2024-11-28 12:50:50.601228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.311 [2024-11-28 12:50:50.601284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.311 [2024-11-28 12:50:50.601297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.311 [2024-11-28 12:50:50.601305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.311 [2024-11-28 12:50:50.601311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.311 [2024-11-28 12:50:50.601325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.311 qpair failed and we were unable to recover it. 
00:27:08.311 [2024-11-28 12:50:50.611257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.311 [2024-11-28 12:50:50.611318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.311 [2024-11-28 12:50:50.611332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.311 [2024-11-28 12:50:50.611339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.311 [2024-11-28 12:50:50.611347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.311 [2024-11-28 12:50:50.611366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.311 qpair failed and we were unable to recover it. 
00:27:08.311 [2024-11-28 12:50:50.621281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.311 [2024-11-28 12:50:50.621342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.311 [2024-11-28 12:50:50.621356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.311 [2024-11-28 12:50:50.621362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.311 [2024-11-28 12:50:50.621368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.311 [2024-11-28 12:50:50.621383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.311 qpair failed and we were unable to recover it. 
00:27:08.311 [2024-11-28 12:50:50.631365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.311 [2024-11-28 12:50:50.631470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.311 [2024-11-28 12:50:50.631484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.311 [2024-11-28 12:50:50.631490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.311 [2024-11-28 12:50:50.631497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.311 [2024-11-28 12:50:50.631511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.311 qpair failed and we were unable to recover it. 
00:27:08.311 [2024-11-28 12:50:50.641319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.311 [2024-11-28 12:50:50.641378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.311 [2024-11-28 12:50:50.641391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.311 [2024-11-28 12:50:50.641398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.311 [2024-11-28 12:50:50.641404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.311 [2024-11-28 12:50:50.641418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.311 qpair failed and we were unable to recover it. 
00:27:08.311 [2024-11-28 12:50:50.651362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.312 [2024-11-28 12:50:50.651418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.312 [2024-11-28 12:50:50.651432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.312 [2024-11-28 12:50:50.651439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.312 [2024-11-28 12:50:50.651444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.312 [2024-11-28 12:50:50.651459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.312 qpair failed and we were unable to recover it. 
00:27:08.312 [2024-11-28 12:50:50.661439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.312 [2024-11-28 12:50:50.661546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.312 [2024-11-28 12:50:50.661559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.312 [2024-11-28 12:50:50.661566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.312 [2024-11-28 12:50:50.661572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.312 [2024-11-28 12:50:50.661587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.312 qpair failed and we were unable to recover it. 
00:27:08.312 [2024-11-28 12:50:50.671427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.312 [2024-11-28 12:50:50.671500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.312 [2024-11-28 12:50:50.671514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.312 [2024-11-28 12:50:50.671521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.312 [2024-11-28 12:50:50.671527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.312 [2024-11-28 12:50:50.671542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.312 qpair failed and we were unable to recover it. 
00:27:08.312 [2024-11-28 12:50:50.681445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.312 [2024-11-28 12:50:50.681502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.312 [2024-11-28 12:50:50.681515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.312 [2024-11-28 12:50:50.681522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.312 [2024-11-28 12:50:50.681528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.312 [2024-11-28 12:50:50.681543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.312 qpair failed and we were unable to recover it. 
00:27:08.312 [2024-11-28 12:50:50.691404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.312 [2024-11-28 12:50:50.691463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.312 [2024-11-28 12:50:50.691477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.312 [2024-11-28 12:50:50.691484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.312 [2024-11-28 12:50:50.691490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.312 [2024-11-28 12:50:50.691505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.312 qpair failed and we were unable to recover it. 
00:27:08.312 [2024-11-28 12:50:50.701510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.312 [2024-11-28 12:50:50.701574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.312 [2024-11-28 12:50:50.701591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.312 [2024-11-28 12:50:50.701599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.312 [2024-11-28 12:50:50.701604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.312 [2024-11-28 12:50:50.701620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.312 qpair failed and we were unable to recover it. 
00:27:08.312 [2024-11-28 12:50:50.711520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.312 [2024-11-28 12:50:50.711577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.312 [2024-11-28 12:50:50.711591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.312 [2024-11-28 12:50:50.711598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.312 [2024-11-28 12:50:50.711605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.312 [2024-11-28 12:50:50.711619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.312 qpair failed and we were unable to recover it. 
00:27:08.312 [2024-11-28 12:50:50.721550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.312 [2024-11-28 12:50:50.721600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.312 [2024-11-28 12:50:50.721614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.312 [2024-11-28 12:50:50.721621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.312 [2024-11-28 12:50:50.721628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.312 [2024-11-28 12:50:50.721643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.312 qpair failed and we were unable to recover it. 
00:27:08.312 [2024-11-28 12:50:50.731584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.312 [2024-11-28 12:50:50.731644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.312 [2024-11-28 12:50:50.731657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.312 [2024-11-28 12:50:50.731664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.312 [2024-11-28 12:50:50.731670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.312 [2024-11-28 12:50:50.731685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.312 qpair failed and we were unable to recover it. 
00:27:08.312 [2024-11-28 12:50:50.741662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.312 [2024-11-28 12:50:50.741728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.312 [2024-11-28 12:50:50.741741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.312 [2024-11-28 12:50:50.741748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.312 [2024-11-28 12:50:50.741754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.312 [2024-11-28 12:50:50.741772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.312 qpair failed and we were unable to recover it. 
00:27:08.312 [2024-11-28 12:50:50.751652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.312 [2024-11-28 12:50:50.751710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.312 [2024-11-28 12:50:50.751724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.312 [2024-11-28 12:50:50.751731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.312 [2024-11-28 12:50:50.751737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.312 [2024-11-28 12:50:50.751752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.312 qpair failed and we were unable to recover it. 
00:27:08.312 [2024-11-28 12:50:50.761667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.312 [2024-11-28 12:50:50.761723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.312 [2024-11-28 12:50:50.761737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.312 [2024-11-28 12:50:50.761744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.312 [2024-11-28 12:50:50.761750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.312 [2024-11-28 12:50:50.761765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.312 qpair failed and we were unable to recover it. 
00:27:08.312 [2024-11-28 12:50:50.771704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.312 [2024-11-28 12:50:50.771762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.312 [2024-11-28 12:50:50.771776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.312 [2024-11-28 12:50:50.771782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.312 [2024-11-28 12:50:50.771789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.312 [2024-11-28 12:50:50.771803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.312 qpair failed and we were unable to recover it. 
00:27:08.312 [2024-11-28 12:50:50.781735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.313 [2024-11-28 12:50:50.781795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.313 [2024-11-28 12:50:50.781809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.313 [2024-11-28 12:50:50.781816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.313 [2024-11-28 12:50:50.781822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.313 [2024-11-28 12:50:50.781837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.313 qpair failed and we were unable to recover it.
00:27:08.313 [2024-11-28 12:50:50.791742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.313 [2024-11-28 12:50:50.791801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.313 [2024-11-28 12:50:50.791816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.313 [2024-11-28 12:50:50.791823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.313 [2024-11-28 12:50:50.791829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.313 [2024-11-28 12:50:50.791844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.313 qpair failed and we were unable to recover it.
00:27:08.313 [2024-11-28 12:50:50.801776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.313 [2024-11-28 12:50:50.801848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.313 [2024-11-28 12:50:50.801862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.313 [2024-11-28 12:50:50.801869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.313 [2024-11-28 12:50:50.801875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.313 [2024-11-28 12:50:50.801889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.313 qpair failed and we were unable to recover it.
00:27:08.313 [2024-11-28 12:50:50.811814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.313 [2024-11-28 12:50:50.811870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.313 [2024-11-28 12:50:50.811884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.313 [2024-11-28 12:50:50.811890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.313 [2024-11-28 12:50:50.811896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.313 [2024-11-28 12:50:50.811911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.313 qpair failed and we were unable to recover it.
00:27:08.313 [2024-11-28 12:50:50.821833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.313 [2024-11-28 12:50:50.821941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.313 [2024-11-28 12:50:50.821959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.313 [2024-11-28 12:50:50.821965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.313 [2024-11-28 12:50:50.821972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.313 [2024-11-28 12:50:50.821987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.313 qpair failed and we were unable to recover it.
00:27:08.572 [2024-11-28 12:50:50.831843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.572 [2024-11-28 12:50:50.831921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.572 [2024-11-28 12:50:50.831938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.572 [2024-11-28 12:50:50.831945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.572 [2024-11-28 12:50:50.831955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.572 [2024-11-28 12:50:50.831969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.572 qpair failed and we were unable to recover it.
00:27:08.572 [2024-11-28 12:50:50.841889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.572 [2024-11-28 12:50:50.841953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.572 [2024-11-28 12:50:50.841968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.572 [2024-11-28 12:50:50.841974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.572 [2024-11-28 12:50:50.841980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.572 [2024-11-28 12:50:50.841995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.572 qpair failed and we were unable to recover it.
00:27:08.572 [2024-11-28 12:50:50.851925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.572 [2024-11-28 12:50:50.851988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.572 [2024-11-28 12:50:50.852003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.572 [2024-11-28 12:50:50.852010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.572 [2024-11-28 12:50:50.852015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.572 [2024-11-28 12:50:50.852030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.572 qpair failed and we were unable to recover it.
00:27:08.572 [2024-11-28 12:50:50.861945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.572 [2024-11-28 12:50:50.862011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.572 [2024-11-28 12:50:50.862026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.573 [2024-11-28 12:50:50.862034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.573 [2024-11-28 12:50:50.862040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.573 [2024-11-28 12:50:50.862055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.573 qpair failed and we were unable to recover it.
00:27:08.573 [2024-11-28 12:50:50.871914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.573 [2024-11-28 12:50:50.871976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.573 [2024-11-28 12:50:50.871991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.573 [2024-11-28 12:50:50.871998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.573 [2024-11-28 12:50:50.872008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.573 [2024-11-28 12:50:50.872022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.573 qpair failed and we were unable to recover it.
00:27:08.573 [2024-11-28 12:50:50.881990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.573 [2024-11-28 12:50:50.882051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.573 [2024-11-28 12:50:50.882065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.573 [2024-11-28 12:50:50.882071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.573 [2024-11-28 12:50:50.882077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.573 [2024-11-28 12:50:50.882092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.573 qpair failed and we were unable to recover it.
00:27:08.573 [2024-11-28 12:50:50.891972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.573 [2024-11-28 12:50:50.892052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.573 [2024-11-28 12:50:50.892066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.573 [2024-11-28 12:50:50.892073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.573 [2024-11-28 12:50:50.892079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.573 [2024-11-28 12:50:50.892094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.573 qpair failed and we were unable to recover it.
00:27:08.573 [2024-11-28 12:50:50.902056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.573 [2024-11-28 12:50:50.902113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.573 [2024-11-28 12:50:50.902128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.573 [2024-11-28 12:50:50.902135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.573 [2024-11-28 12:50:50.902141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.573 [2024-11-28 12:50:50.902156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.573 qpair failed and we were unable to recover it.
00:27:08.573 [2024-11-28 12:50:50.912029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.573 [2024-11-28 12:50:50.912087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.573 [2024-11-28 12:50:50.912101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.573 [2024-11-28 12:50:50.912108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.573 [2024-11-28 12:50:50.912114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.573 [2024-11-28 12:50:50.912128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.573 qpair failed and we were unable to recover it.
00:27:08.573 [2024-11-28 12:50:50.922109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.573 [2024-11-28 12:50:50.922166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.573 [2024-11-28 12:50:50.922179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.573 [2024-11-28 12:50:50.922186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.573 [2024-11-28 12:50:50.922192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.573 [2024-11-28 12:50:50.922207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.573 qpair failed and we were unable to recover it.
00:27:08.573 [2024-11-28 12:50:50.932154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.573 [2024-11-28 12:50:50.932212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.573 [2024-11-28 12:50:50.932226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.573 [2024-11-28 12:50:50.932233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.573 [2024-11-28 12:50:50.932239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.573 [2024-11-28 12:50:50.932254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.573 qpair failed and we were unable to recover it.
00:27:08.573 [2024-11-28 12:50:50.942228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.573 [2024-11-28 12:50:50.942286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.573 [2024-11-28 12:50:50.942300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.573 [2024-11-28 12:50:50.942307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.573 [2024-11-28 12:50:50.942312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.573 [2024-11-28 12:50:50.942328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.573 qpair failed and we were unable to recover it.
00:27:08.573 [2024-11-28 12:50:50.952206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.573 [2024-11-28 12:50:50.952263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.573 [2024-11-28 12:50:50.952277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.573 [2024-11-28 12:50:50.952283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.573 [2024-11-28 12:50:50.952289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.573 [2024-11-28 12:50:50.952304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.573 qpair failed and we were unable to recover it.
00:27:08.573 [2024-11-28 12:50:50.962232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.573 [2024-11-28 12:50:50.962282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.573 [2024-11-28 12:50:50.962299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.573 [2024-11-28 12:50:50.962306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.573 [2024-11-28 12:50:50.962311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.573 [2024-11-28 12:50:50.962326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.573 qpair failed and we were unable to recover it.
00:27:08.573 [2024-11-28 12:50:50.972270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.573 [2024-11-28 12:50:50.972329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.573 [2024-11-28 12:50:50.972342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.573 [2024-11-28 12:50:50.972350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.573 [2024-11-28 12:50:50.972356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.573 [2024-11-28 12:50:50.972371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.573 qpair failed and we were unable to recover it.
00:27:08.573 [2024-11-28 12:50:50.982298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.573 [2024-11-28 12:50:50.982407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.573 [2024-11-28 12:50:50.982429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.573 [2024-11-28 12:50:50.982436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.573 [2024-11-28 12:50:50.982442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.573 [2024-11-28 12:50:50.982457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.573 qpair failed and we were unable to recover it.
00:27:08.573 [2024-11-28 12:50:50.992303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.573 [2024-11-28 12:50:50.992409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.573 [2024-11-28 12:50:50.992423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.574 [2024-11-28 12:50:50.992430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.574 [2024-11-28 12:50:50.992436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.574 [2024-11-28 12:50:50.992451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.574 qpair failed and we were unable to recover it.
00:27:08.574 [2024-11-28 12:50:51.002350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.574 [2024-11-28 12:50:51.002410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.574 [2024-11-28 12:50:51.002423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.574 [2024-11-28 12:50:51.002430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.574 [2024-11-28 12:50:51.002439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.574 [2024-11-28 12:50:51.002454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.574 qpair failed and we were unable to recover it.
00:27:08.574 [2024-11-28 12:50:51.012389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.574 [2024-11-28 12:50:51.012460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.574 [2024-11-28 12:50:51.012474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.574 [2024-11-28 12:50:51.012481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.574 [2024-11-28 12:50:51.012486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.574 [2024-11-28 12:50:51.012501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.574 qpair failed and we were unable to recover it.
00:27:08.574 [2024-11-28 12:50:51.022478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.574 [2024-11-28 12:50:51.022535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.574 [2024-11-28 12:50:51.022550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.574 [2024-11-28 12:50:51.022556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.574 [2024-11-28 12:50:51.022563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.574 [2024-11-28 12:50:51.022578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.574 qpair failed and we were unable to recover it.
00:27:08.574 [2024-11-28 12:50:51.032468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.574 [2024-11-28 12:50:51.032530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.574 [2024-11-28 12:50:51.032544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.574 [2024-11-28 12:50:51.032551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.574 [2024-11-28 12:50:51.032557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.574 [2024-11-28 12:50:51.032572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.574 qpair failed and we were unable to recover it.
00:27:08.574 [2024-11-28 12:50:51.042463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.574 [2024-11-28 12:50:51.042519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.574 [2024-11-28 12:50:51.042533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.574 [2024-11-28 12:50:51.042540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.574 [2024-11-28 12:50:51.042545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.574 [2024-11-28 12:50:51.042560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.574 qpair failed and we were unable to recover it.
00:27:08.574 [2024-11-28 12:50:51.052501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.574 [2024-11-28 12:50:51.052559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.574 [2024-11-28 12:50:51.052573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.574 [2024-11-28 12:50:51.052580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.574 [2024-11-28 12:50:51.052586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.574 [2024-11-28 12:50:51.052599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.574 qpair failed and we were unable to recover it.
00:27:08.574 [2024-11-28 12:50:51.062532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.574 [2024-11-28 12:50:51.062589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.574 [2024-11-28 12:50:51.062602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.574 [2024-11-28 12:50:51.062609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.574 [2024-11-28 12:50:51.062615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.574 [2024-11-28 12:50:51.062630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.574 qpair failed and we were unable to recover it.
00:27:08.574 [2024-11-28 12:50:51.072556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.574 [2024-11-28 12:50:51.072610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.574 [2024-11-28 12:50:51.072624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.574 [2024-11-28 12:50:51.072631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.574 [2024-11-28 12:50:51.072637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.574 [2024-11-28 12:50:51.072652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.574 qpair failed and we were unable to recover it.
00:27:08.574 [2024-11-28 12:50:51.082576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.574 [2024-11-28 12:50:51.082635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.574 [2024-11-28 12:50:51.082649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.574 [2024-11-28 12:50:51.082656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.574 [2024-11-28 12:50:51.082662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.574 [2024-11-28 12:50:51.082677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.574 qpair failed and we were unable to recover it.
00:27:08.834 [2024-11-28 12:50:51.092641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.834 [2024-11-28 12:50:51.092706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.834 [2024-11-28 12:50:51.092720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.834 [2024-11-28 12:50:51.092727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.834 [2024-11-28 12:50:51.092732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.834 [2024-11-28 12:50:51.092748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.834 qpair failed and we were unable to recover it.
00:27:08.834 [2024-11-28 12:50:51.102674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.834 [2024-11-28 12:50:51.102737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.834 [2024-11-28 12:50:51.102751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.834 [2024-11-28 12:50:51.102758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.834 [2024-11-28 12:50:51.102764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.834 [2024-11-28 12:50:51.102779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.834 qpair failed and we were unable to recover it.
00:27:08.834 [2024-11-28 12:50:51.112676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.834 [2024-11-28 12:50:51.112732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.834 [2024-11-28 12:50:51.112746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.834 [2024-11-28 12:50:51.112753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.834 [2024-11-28 12:50:51.112759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.834 [2024-11-28 12:50:51.112773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.834 qpair failed and we were unable to recover it.
00:27:08.834 [2024-11-28 12:50:51.122703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:08.834 [2024-11-28 12:50:51.122760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:08.834 [2024-11-28 12:50:51.122775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:08.834 [2024-11-28 12:50:51.122782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:08.834 [2024-11-28 12:50:51.122788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:08.834 [2024-11-28 12:50:51.122803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.834 qpair failed and we were unable to recover it.
00:27:08.834 [2024-11-28 12:50:51.132736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.834 [2024-11-28 12:50:51.132795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.834 [2024-11-28 12:50:51.132809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.834 [2024-11-28 12:50:51.132819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.834 [2024-11-28 12:50:51.132825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.834 [2024-11-28 12:50:51.132840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.834 qpair failed and we were unable to recover it. 
00:27:08.834 [2024-11-28 12:50:51.142761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.834 [2024-11-28 12:50:51.142822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.834 [2024-11-28 12:50:51.142836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.834 [2024-11-28 12:50:51.142843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.834 [2024-11-28 12:50:51.142849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.834 [2024-11-28 12:50:51.142864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.834 qpair failed and we were unable to recover it. 
00:27:08.835 [2024-11-28 12:50:51.152791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.835 [2024-11-28 12:50:51.152846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.835 [2024-11-28 12:50:51.152860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.835 [2024-11-28 12:50:51.152867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.835 [2024-11-28 12:50:51.152873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.835 [2024-11-28 12:50:51.152887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.835 qpair failed and we were unable to recover it. 
00:27:08.835 [2024-11-28 12:50:51.162820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.835 [2024-11-28 12:50:51.162874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.835 [2024-11-28 12:50:51.162888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.835 [2024-11-28 12:50:51.162895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.835 [2024-11-28 12:50:51.162901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.835 [2024-11-28 12:50:51.162917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.835 qpair failed and we were unable to recover it. 
00:27:08.835 [2024-11-28 12:50:51.172854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.835 [2024-11-28 12:50:51.172916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.835 [2024-11-28 12:50:51.172929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.835 [2024-11-28 12:50:51.172936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.835 [2024-11-28 12:50:51.172942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.835 [2024-11-28 12:50:51.172965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.835 qpair failed and we were unable to recover it. 
00:27:08.835 [2024-11-28 12:50:51.182909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.835 [2024-11-28 12:50:51.182987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.835 [2024-11-28 12:50:51.183003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.835 [2024-11-28 12:50:51.183009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.835 [2024-11-28 12:50:51.183016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.835 [2024-11-28 12:50:51.183032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.835 qpair failed and we were unable to recover it. 
00:27:08.835 [2024-11-28 12:50:51.192914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.835 [2024-11-28 12:50:51.192974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.835 [2024-11-28 12:50:51.192988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.835 [2024-11-28 12:50:51.192995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.835 [2024-11-28 12:50:51.193001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.835 [2024-11-28 12:50:51.193016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.835 qpair failed and we were unable to recover it. 
00:27:08.835 [2024-11-28 12:50:51.202934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.835 [2024-11-28 12:50:51.202995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.835 [2024-11-28 12:50:51.203009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.835 [2024-11-28 12:50:51.203016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.835 [2024-11-28 12:50:51.203021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.835 [2024-11-28 12:50:51.203037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.835 qpair failed and we were unable to recover it. 
00:27:08.835 [2024-11-28 12:50:51.212895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.835 [2024-11-28 12:50:51.212957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.835 [2024-11-28 12:50:51.212971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.835 [2024-11-28 12:50:51.212978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.835 [2024-11-28 12:50:51.212984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.835 [2024-11-28 12:50:51.212999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.835 qpair failed and we were unable to recover it. 
00:27:08.835 [2024-11-28 12:50:51.222987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.835 [2024-11-28 12:50:51.223064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.835 [2024-11-28 12:50:51.223078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.835 [2024-11-28 12:50:51.223085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.835 [2024-11-28 12:50:51.223091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.835 [2024-11-28 12:50:51.223105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.835 qpair failed and we were unable to recover it. 
00:27:08.835 [2024-11-28 12:50:51.233020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.835 [2024-11-28 12:50:51.233079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.835 [2024-11-28 12:50:51.233093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.835 [2024-11-28 12:50:51.233099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.835 [2024-11-28 12:50:51.233105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.835 [2024-11-28 12:50:51.233120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.835 qpair failed and we were unable to recover it. 
00:27:08.835 [2024-11-28 12:50:51.243106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.835 [2024-11-28 12:50:51.243213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.835 [2024-11-28 12:50:51.243226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.835 [2024-11-28 12:50:51.243233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.835 [2024-11-28 12:50:51.243239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.835 [2024-11-28 12:50:51.243253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.835 qpair failed and we were unable to recover it. 
00:27:08.835 [2024-11-28 12:50:51.253111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.835 [2024-11-28 12:50:51.253173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.835 [2024-11-28 12:50:51.253189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.835 [2024-11-28 12:50:51.253196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.835 [2024-11-28 12:50:51.253202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.835 [2024-11-28 12:50:51.253217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.835 qpair failed and we were unable to recover it. 
00:27:08.835 [2024-11-28 12:50:51.263119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.835 [2024-11-28 12:50:51.263173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.835 [2024-11-28 12:50:51.263192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.835 [2024-11-28 12:50:51.263199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.835 [2024-11-28 12:50:51.263205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.835 [2024-11-28 12:50:51.263220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.835 qpair failed and we were unable to recover it. 
00:27:08.835 [2024-11-28 12:50:51.273101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.835 [2024-11-28 12:50:51.273170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.835 [2024-11-28 12:50:51.273184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.835 [2024-11-28 12:50:51.273190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.835 [2024-11-28 12:50:51.273197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.835 [2024-11-28 12:50:51.273212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.835 qpair failed and we were unable to recover it. 
00:27:08.835 [2024-11-28 12:50:51.283193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.836 [2024-11-28 12:50:51.283259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.836 [2024-11-28 12:50:51.283272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.836 [2024-11-28 12:50:51.283279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.836 [2024-11-28 12:50:51.283285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.836 [2024-11-28 12:50:51.283300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.836 qpair failed and we were unable to recover it. 
00:27:08.836 [2024-11-28 12:50:51.293205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.836 [2024-11-28 12:50:51.293280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.836 [2024-11-28 12:50:51.293294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.836 [2024-11-28 12:50:51.293301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.836 [2024-11-28 12:50:51.293308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.836 [2024-11-28 12:50:51.293321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.836 qpair failed and we were unable to recover it. 
00:27:08.836 [2024-11-28 12:50:51.303341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.836 [2024-11-28 12:50:51.303399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.836 [2024-11-28 12:50:51.303414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.836 [2024-11-28 12:50:51.303421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.836 [2024-11-28 12:50:51.303427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.836 [2024-11-28 12:50:51.303445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.836 qpair failed and we were unable to recover it. 
00:27:08.836 [2024-11-28 12:50:51.313326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.836 [2024-11-28 12:50:51.313383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.836 [2024-11-28 12:50:51.313397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.836 [2024-11-28 12:50:51.313404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.836 [2024-11-28 12:50:51.313409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.836 [2024-11-28 12:50:51.313425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.836 qpair failed and we were unable to recover it. 
00:27:08.836 [2024-11-28 12:50:51.323320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.836 [2024-11-28 12:50:51.323380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.836 [2024-11-28 12:50:51.323394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.836 [2024-11-28 12:50:51.323401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.836 [2024-11-28 12:50:51.323407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.836 [2024-11-28 12:50:51.323421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.836 qpair failed and we were unable to recover it. 
00:27:08.836 [2024-11-28 12:50:51.333367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.836 [2024-11-28 12:50:51.333441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.836 [2024-11-28 12:50:51.333454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.836 [2024-11-28 12:50:51.333461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.836 [2024-11-28 12:50:51.333467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.836 [2024-11-28 12:50:51.333481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.836 qpair failed and we were unable to recover it. 
00:27:08.836 [2024-11-28 12:50:51.343414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:08.836 [2024-11-28 12:50:51.343521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:08.836 [2024-11-28 12:50:51.343534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:08.836 [2024-11-28 12:50:51.343541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:08.836 [2024-11-28 12:50:51.343547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:08.836 [2024-11-28 12:50:51.343561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:08.836 qpair failed and we were unable to recover it. 
00:27:09.095 [2024-11-28 12:50:51.353376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.095 [2024-11-28 12:50:51.353436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.095 [2024-11-28 12:50:51.353450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.095 [2024-11-28 12:50:51.353457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.095 [2024-11-28 12:50:51.353463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.095 [2024-11-28 12:50:51.353477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.095 qpair failed and we were unable to recover it. 
00:27:09.095 [2024-11-28 12:50:51.363357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.095 [2024-11-28 12:50:51.363416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.095 [2024-11-28 12:50:51.363430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.095 [2024-11-28 12:50:51.363437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.095 [2024-11-28 12:50:51.363443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.095 [2024-11-28 12:50:51.363458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.095 qpair failed and we were unable to recover it. 
00:27:09.095 [2024-11-28 12:50:51.373413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.095 [2024-11-28 12:50:51.373488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.095 [2024-11-28 12:50:51.373502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.095 [2024-11-28 12:50:51.373509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.095 [2024-11-28 12:50:51.373515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.095 [2024-11-28 12:50:51.373529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.095 qpair failed and we were unable to recover it. 
00:27:09.095 [2024-11-28 12:50:51.383478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.095 [2024-11-28 12:50:51.383532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.095 [2024-11-28 12:50:51.383546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.095 [2024-11-28 12:50:51.383553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.095 [2024-11-28 12:50:51.383559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.095 [2024-11-28 12:50:51.383573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.095 qpair failed and we were unable to recover it. 
00:27:09.095 [2024-11-28 12:50:51.393544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.095 [2024-11-28 12:50:51.393608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.095 [2024-11-28 12:50:51.393625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.095 [2024-11-28 12:50:51.393632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.095 [2024-11-28 12:50:51.393637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.095 [2024-11-28 12:50:51.393653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.095 qpair failed and we were unable to recover it. 
00:27:09.095 [2024-11-28 12:50:51.403559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.095 [2024-11-28 12:50:51.403620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.095 [2024-11-28 12:50:51.403634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.095 [2024-11-28 12:50:51.403641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.095 [2024-11-28 12:50:51.403647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.095 [2024-11-28 12:50:51.403662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.095 qpair failed and we were unable to recover it. 
00:27:09.095 [2024-11-28 12:50:51.413579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.095 [2024-11-28 12:50:51.413686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.095 [2024-11-28 12:50:51.413700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.095 [2024-11-28 12:50:51.413707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.095 [2024-11-28 12:50:51.413714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.095 [2024-11-28 12:50:51.413728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.095 qpair failed and we were unable to recover it. 
00:27:09.096 [2024-11-28 12:50:51.423539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.096 [2024-11-28 12:50:51.423596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.096 [2024-11-28 12:50:51.423610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.096 [2024-11-28 12:50:51.423617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.096 [2024-11-28 12:50:51.423623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.096 [2024-11-28 12:50:51.423638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.096 qpair failed and we were unable to recover it. 
00:27:09.096 [2024-11-28 12:50:51.433675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.096 [2024-11-28 12:50:51.433740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.096 [2024-11-28 12:50:51.433754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.096 [2024-11-28 12:50:51.433760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.096 [2024-11-28 12:50:51.433770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.096 [2024-11-28 12:50:51.433785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.096 qpair failed and we were unable to recover it. 
00:27:09.096 [2024-11-28 12:50:51.443674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.096 [2024-11-28 12:50:51.443755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.096 [2024-11-28 12:50:51.443769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.096 [2024-11-28 12:50:51.443775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.096 [2024-11-28 12:50:51.443781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.096 [2024-11-28 12:50:51.443796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.096 qpair failed and we were unable to recover it. 
00:27:09.096 [2024-11-28 12:50:51.453686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.096 [2024-11-28 12:50:51.453751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.096 [2024-11-28 12:50:51.453765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.096 [2024-11-28 12:50:51.453772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.096 [2024-11-28 12:50:51.453778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.096 [2024-11-28 12:50:51.453793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.096 qpair failed and we were unable to recover it. 
00:27:09.096 [2024-11-28 12:50:51.463656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.096 [2024-11-28 12:50:51.463717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.096 [2024-11-28 12:50:51.463733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.096 [2024-11-28 12:50:51.463740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.096 [2024-11-28 12:50:51.463746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.096 [2024-11-28 12:50:51.463761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.096 qpair failed and we were unable to recover it. 
00:27:09.096 [2024-11-28 12:50:51.473764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.096 [2024-11-28 12:50:51.473820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.096 [2024-11-28 12:50:51.473834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.096 [2024-11-28 12:50:51.473842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.096 [2024-11-28 12:50:51.473847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.096 [2024-11-28 12:50:51.473863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.096 qpair failed and we were unable to recover it. 
00:27:09.096 [2024-11-28 12:50:51.483746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.096 [2024-11-28 12:50:51.483802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.096 [2024-11-28 12:50:51.483816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.096 [2024-11-28 12:50:51.483823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.096 [2024-11-28 12:50:51.483829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.096 [2024-11-28 12:50:51.483844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.096 qpair failed and we were unable to recover it. 
00:27:09.096 [2024-11-28 12:50:51.493805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.096 [2024-11-28 12:50:51.493864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.096 [2024-11-28 12:50:51.493879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.096 [2024-11-28 12:50:51.493886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.096 [2024-11-28 12:50:51.493892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.096 [2024-11-28 12:50:51.493908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.096 qpair failed and we were unable to recover it. 
00:27:09.096 [2024-11-28 12:50:51.503759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.096 [2024-11-28 12:50:51.503816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.096 [2024-11-28 12:50:51.503831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.096 [2024-11-28 12:50:51.503838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.096 [2024-11-28 12:50:51.503844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.096 [2024-11-28 12:50:51.503858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.096 qpair failed and we were unable to recover it. 
00:27:09.096 [2024-11-28 12:50:51.513812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.096 [2024-11-28 12:50:51.513866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.096 [2024-11-28 12:50:51.513880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.096 [2024-11-28 12:50:51.513887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.096 [2024-11-28 12:50:51.513893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.096 [2024-11-28 12:50:51.513908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.096 qpair failed and we were unable to recover it. 
00:27:09.096 [2024-11-28 12:50:51.523880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.096 [2024-11-28 12:50:51.523944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.096 [2024-11-28 12:50:51.523975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.096 [2024-11-28 12:50:51.523982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.096 [2024-11-28 12:50:51.523988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.096 [2024-11-28 12:50:51.524008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.096 qpair failed and we were unable to recover it. 
00:27:09.096 [2024-11-28 12:50:51.533913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.096 [2024-11-28 12:50:51.533978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.096 [2024-11-28 12:50:51.533992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.096 [2024-11-28 12:50:51.533999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.096 [2024-11-28 12:50:51.534005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.096 [2024-11-28 12:50:51.534020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.096 qpair failed and we were unable to recover it. 
00:27:09.096 [2024-11-28 12:50:51.543868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.096 [2024-11-28 12:50:51.543926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.096 [2024-11-28 12:50:51.543940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.096 [2024-11-28 12:50:51.543950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.096 [2024-11-28 12:50:51.543957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.096 [2024-11-28 12:50:51.543972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.096 qpair failed and we were unable to recover it. 
00:27:09.096 [2024-11-28 12:50:51.553967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.096 [2024-11-28 12:50:51.554022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.096 [2024-11-28 12:50:51.554036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.096 [2024-11-28 12:50:51.554043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.096 [2024-11-28 12:50:51.554049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.096 [2024-11-28 12:50:51.554064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.096 qpair failed and we were unable to recover it. 
00:27:09.096 [2024-11-28 12:50:51.563989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.096 [2024-11-28 12:50:51.564045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.096 [2024-11-28 12:50:51.564059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.096 [2024-11-28 12:50:51.564069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.096 [2024-11-28 12:50:51.564075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.096 [2024-11-28 12:50:51.564091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.096 qpair failed and we were unable to recover it. 
00:27:09.096 [2024-11-28 12:50:51.574023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.096 [2024-11-28 12:50:51.574087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.096 [2024-11-28 12:50:51.574101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.096 [2024-11-28 12:50:51.574108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.096 [2024-11-28 12:50:51.574114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.096 [2024-11-28 12:50:51.574129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.096 qpair failed and we were unable to recover it. 
00:27:09.096 [2024-11-28 12:50:51.583986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.096 [2024-11-28 12:50:51.584042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.096 [2024-11-28 12:50:51.584057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.096 [2024-11-28 12:50:51.584064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.096 [2024-11-28 12:50:51.584070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.096 [2024-11-28 12:50:51.584084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.096 qpair failed and we were unable to recover it. 
00:27:09.096 [2024-11-28 12:50:51.594026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.096 [2024-11-28 12:50:51.594079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.096 [2024-11-28 12:50:51.594093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.096 [2024-11-28 12:50:51.594100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.096 [2024-11-28 12:50:51.594107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.096 [2024-11-28 12:50:51.594121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.096 qpair failed and we were unable to recover it. 
00:27:09.096 [2024-11-28 12:50:51.604127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.096 [2024-11-28 12:50:51.604183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.096 [2024-11-28 12:50:51.604197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.097 [2024-11-28 12:50:51.604204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.097 [2024-11-28 12:50:51.604209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.097 [2024-11-28 12:50:51.604224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.097 qpair failed and we were unable to recover it. 
00:27:09.355 [2024-11-28 12:50:51.614086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.355 [2024-11-28 12:50:51.614144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.355 [2024-11-28 12:50:51.614157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.355 [2024-11-28 12:50:51.614164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.355 [2024-11-28 12:50:51.614170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.355 [2024-11-28 12:50:51.614185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.355 qpair failed and we were unable to recover it. 
00:27:09.355 [2024-11-28 12:50:51.624173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.355 [2024-11-28 12:50:51.624232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.355 [2024-11-28 12:50:51.624245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.355 [2024-11-28 12:50:51.624252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.355 [2024-11-28 12:50:51.624258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.355 [2024-11-28 12:50:51.624273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.355 qpair failed and we were unable to recover it. 
00:27:09.355 [2024-11-28 12:50:51.634197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.355 [2024-11-28 12:50:51.634255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.355 [2024-11-28 12:50:51.634269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.355 [2024-11-28 12:50:51.634276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.355 [2024-11-28 12:50:51.634282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.355 [2024-11-28 12:50:51.634296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.355 qpair failed and we were unable to recover it. 
00:27:09.355 [2024-11-28 12:50:51.644228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.355 [2024-11-28 12:50:51.644285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.355 [2024-11-28 12:50:51.644298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.355 [2024-11-28 12:50:51.644305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.355 [2024-11-28 12:50:51.644311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.355 [2024-11-28 12:50:51.644326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.355 qpair failed and we were unable to recover it. 
00:27:09.355 [2024-11-28 12:50:51.654213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.355 [2024-11-28 12:50:51.654300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.355 [2024-11-28 12:50:51.654313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.355 [2024-11-28 12:50:51.654320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.355 [2024-11-28 12:50:51.654326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.355 [2024-11-28 12:50:51.654340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.355 qpair failed and we were unable to recover it. 
00:27:09.355 [2024-11-28 12:50:51.664223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.355 [2024-11-28 12:50:51.664312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.355 [2024-11-28 12:50:51.664326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.355 [2024-11-28 12:50:51.664332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.355 [2024-11-28 12:50:51.664338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.355 [2024-11-28 12:50:51.664352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.355 qpair failed and we were unable to recover it. 
00:27:09.355 [2024-11-28 12:50:51.674299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.355 [2024-11-28 12:50:51.674355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.355 [2024-11-28 12:50:51.674368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.355 [2024-11-28 12:50:51.674375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.355 [2024-11-28 12:50:51.674381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.355 [2024-11-28 12:50:51.674396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.355 qpair failed and we were unable to recover it. 
00:27:09.355 [2024-11-28 12:50:51.684342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.355 [2024-11-28 12:50:51.684399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.355 [2024-11-28 12:50:51.684412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.355 [2024-11-28 12:50:51.684419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.355 [2024-11-28 12:50:51.684425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.355 [2024-11-28 12:50:51.684440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.355 qpair failed and we were unable to recover it. 
00:27:09.355 [2024-11-28 12:50:51.694311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.355 [2024-11-28 12:50:51.694376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.355 [2024-11-28 12:50:51.694390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.355 [2024-11-28 12:50:51.694400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.355 [2024-11-28 12:50:51.694406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.356 [2024-11-28 12:50:51.694420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.356 qpair failed and we were unable to recover it. 
00:27:09.356 [2024-11-28 12:50:51.704412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.356 [2024-11-28 12:50:51.704468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.356 [2024-11-28 12:50:51.704482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.356 [2024-11-28 12:50:51.704489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.356 [2024-11-28 12:50:51.704495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.356 [2024-11-28 12:50:51.704510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.356 qpair failed and we were unable to recover it. 
00:27:09.356 [2024-11-28 12:50:51.714413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.356 [2024-11-28 12:50:51.714482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.356 [2024-11-28 12:50:51.714496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.356 [2024-11-28 12:50:51.714503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.356 [2024-11-28 12:50:51.714509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.356 [2024-11-28 12:50:51.714524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.356 qpair failed and we were unable to recover it. 
00:27:09.356 [... identical CONNECT failure sequence (ctrlr.c:764 Unknown controller ID 0x1; nvme_fabric.c:599 Connect command failed, rc -5; nvme_fabric.c:610 sct 1, sc 130; nvme_tcp.c:2348/2125 failed to connect tqpair=0x7f8c64000b90; nvme_qpair.c:812 CQ transport error -6 (No such device or address) on qpair id 1; qpair failed and we were unable to recover it) repeated every ~10 ms from 12:50:51.724456 through 12:50:52.055545 ...]
00:27:09.616 [2024-11-28 12:50:52.065427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.616 [2024-11-28 12:50:52.065498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.616 [2024-11-28 12:50:52.065512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.616 [2024-11-28 12:50:52.065519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.616 [2024-11-28 12:50:52.065525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.616 [2024-11-28 12:50:52.065539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.616 qpair failed and we were unable to recover it. 
00:27:09.616 [2024-11-28 12:50:52.075522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.616 [2024-11-28 12:50:52.075577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.616 [2024-11-28 12:50:52.075590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.616 [2024-11-28 12:50:52.075597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.616 [2024-11-28 12:50:52.075603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.616 [2024-11-28 12:50:52.075618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.616 qpair failed and we were unable to recover it. 
00:27:09.616 [2024-11-28 12:50:52.085516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.616 [2024-11-28 12:50:52.085572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.616 [2024-11-28 12:50:52.085589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.616 [2024-11-28 12:50:52.085596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.616 [2024-11-28 12:50:52.085602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.616 [2024-11-28 12:50:52.085616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.616 qpair failed and we were unable to recover it. 
00:27:09.616 [2024-11-28 12:50:52.095478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.616 [2024-11-28 12:50:52.095533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.616 [2024-11-28 12:50:52.095547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.616 [2024-11-28 12:50:52.095553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.616 [2024-11-28 12:50:52.095559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.616 [2024-11-28 12:50:52.095574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.616 qpair failed and we were unable to recover it. 
00:27:09.616 [2024-11-28 12:50:52.105576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.616 [2024-11-28 12:50:52.105633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.616 [2024-11-28 12:50:52.105647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.616 [2024-11-28 12:50:52.105654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.616 [2024-11-28 12:50:52.105660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.616 [2024-11-28 12:50:52.105675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.616 qpair failed and we were unable to recover it. 
00:27:09.616 [2024-11-28 12:50:52.115596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.616 [2024-11-28 12:50:52.115650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.617 [2024-11-28 12:50:52.115664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.617 [2024-11-28 12:50:52.115671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.617 [2024-11-28 12:50:52.115676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.617 [2024-11-28 12:50:52.115691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.617 qpair failed and we were unable to recover it. 
00:27:09.617 [2024-11-28 12:50:52.125627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.617 [2024-11-28 12:50:52.125688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.617 [2024-11-28 12:50:52.125703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.617 [2024-11-28 12:50:52.125713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.617 [2024-11-28 12:50:52.125720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.617 [2024-11-28 12:50:52.125735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.617 qpair failed and we were unable to recover it. 
00:27:09.873 [2024-11-28 12:50:52.135672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.873 [2024-11-28 12:50:52.135731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.873 [2024-11-28 12:50:52.135745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.873 [2024-11-28 12:50:52.135752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.873 [2024-11-28 12:50:52.135758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.873 [2024-11-28 12:50:52.135773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.874 qpair failed and we were unable to recover it. 
00:27:09.874 [2024-11-28 12:50:52.145685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.874 [2024-11-28 12:50:52.145741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.874 [2024-11-28 12:50:52.145755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.874 [2024-11-28 12:50:52.145762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.874 [2024-11-28 12:50:52.145768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.874 [2024-11-28 12:50:52.145782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.874 qpair failed and we were unable to recover it. 
00:27:09.874 [2024-11-28 12:50:52.155705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.874 [2024-11-28 12:50:52.155766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.874 [2024-11-28 12:50:52.155779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.874 [2024-11-28 12:50:52.155786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.874 [2024-11-28 12:50:52.155792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.874 [2024-11-28 12:50:52.155807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.874 qpair failed and we were unable to recover it. 
00:27:09.874 [2024-11-28 12:50:52.165741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.874 [2024-11-28 12:50:52.165800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.874 [2024-11-28 12:50:52.165814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.874 [2024-11-28 12:50:52.165821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.874 [2024-11-28 12:50:52.165827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.874 [2024-11-28 12:50:52.165842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.874 qpair failed and we were unable to recover it. 
00:27:09.874 [2024-11-28 12:50:52.175773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.874 [2024-11-28 12:50:52.175829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.874 [2024-11-28 12:50:52.175843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.874 [2024-11-28 12:50:52.175849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.874 [2024-11-28 12:50:52.175855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.874 [2024-11-28 12:50:52.175870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.874 qpair failed and we were unable to recover it. 
00:27:09.874 [2024-11-28 12:50:52.185817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.874 [2024-11-28 12:50:52.185878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.874 [2024-11-28 12:50:52.185894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.874 [2024-11-28 12:50:52.185901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.874 [2024-11-28 12:50:52.185907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.874 [2024-11-28 12:50:52.185921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.874 qpair failed and we were unable to recover it. 
00:27:09.874 [2024-11-28 12:50:52.195801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.874 [2024-11-28 12:50:52.195867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.874 [2024-11-28 12:50:52.195881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.874 [2024-11-28 12:50:52.195887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.874 [2024-11-28 12:50:52.195893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.874 [2024-11-28 12:50:52.195908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.874 qpair failed and we were unable to recover it. 
00:27:09.874 [2024-11-28 12:50:52.205854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.874 [2024-11-28 12:50:52.205906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.874 [2024-11-28 12:50:52.205921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.874 [2024-11-28 12:50:52.205928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.874 [2024-11-28 12:50:52.205934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.874 [2024-11-28 12:50:52.205953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.874 qpair failed and we were unable to recover it. 
00:27:09.874 [2024-11-28 12:50:52.215884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.874 [2024-11-28 12:50:52.215988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.874 [2024-11-28 12:50:52.216001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.874 [2024-11-28 12:50:52.216008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.874 [2024-11-28 12:50:52.216014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.874 [2024-11-28 12:50:52.216029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.874 qpair failed and we were unable to recover it. 
00:27:09.874 [2024-11-28 12:50:52.225918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.874 [2024-11-28 12:50:52.225986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.874 [2024-11-28 12:50:52.226000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.874 [2024-11-28 12:50:52.226006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.874 [2024-11-28 12:50:52.226012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.874 [2024-11-28 12:50:52.226027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.874 qpair failed and we were unable to recover it. 
00:27:09.874 [2024-11-28 12:50:52.235971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.874 [2024-11-28 12:50:52.236022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.874 [2024-11-28 12:50:52.236036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.874 [2024-11-28 12:50:52.236042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.874 [2024-11-28 12:50:52.236048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.874 [2024-11-28 12:50:52.236063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.874 qpair failed and we were unable to recover it. 
00:27:09.874 [2024-11-28 12:50:52.245920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.874 [2024-11-28 12:50:52.246012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.874 [2024-11-28 12:50:52.246027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.874 [2024-11-28 12:50:52.246034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.874 [2024-11-28 12:50:52.246040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.874 [2024-11-28 12:50:52.246055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.874 qpair failed and we were unable to recover it. 
00:27:09.874 [2024-11-28 12:50:52.256010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.874 [2024-11-28 12:50:52.256071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.874 [2024-11-28 12:50:52.256085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.874 [2024-11-28 12:50:52.256095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.874 [2024-11-28 12:50:52.256101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.874 [2024-11-28 12:50:52.256116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.874 qpair failed and we were unable to recover it. 
00:27:09.874 [2024-11-28 12:50:52.266030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.874 [2024-11-28 12:50:52.266102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.874 [2024-11-28 12:50:52.266117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.874 [2024-11-28 12:50:52.266123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.874 [2024-11-28 12:50:52.266129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.874 [2024-11-28 12:50:52.266144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.874 qpair failed and we were unable to recover it. 
00:27:09.874 [2024-11-28 12:50:52.276107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.874 [2024-11-28 12:50:52.276204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.874 [2024-11-28 12:50:52.276217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.874 [2024-11-28 12:50:52.276224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.874 [2024-11-28 12:50:52.276230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.874 [2024-11-28 12:50:52.276246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.874 qpair failed and we were unable to recover it. 
00:27:09.874 [2024-11-28 12:50:52.286182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.874 [2024-11-28 12:50:52.286233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.874 [2024-11-28 12:50:52.286246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.874 [2024-11-28 12:50:52.286252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.874 [2024-11-28 12:50:52.286258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.874 [2024-11-28 12:50:52.286273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.874 qpair failed and we were unable to recover it. 
00:27:09.874 [2024-11-28 12:50:52.296097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.874 [2024-11-28 12:50:52.296171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.874 [2024-11-28 12:50:52.296185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.874 [2024-11-28 12:50:52.296192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.874 [2024-11-28 12:50:52.296197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.874 [2024-11-28 12:50:52.296215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.874 qpair failed and we were unable to recover it. 
00:27:09.874 [2024-11-28 12:50:52.306157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.874 [2024-11-28 12:50:52.306222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.874 [2024-11-28 12:50:52.306235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.874 [2024-11-28 12:50:52.306242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.874 [2024-11-28 12:50:52.306248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.874 [2024-11-28 12:50:52.306263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.874 qpair failed and we were unable to recover it. 
00:27:09.874 [2024-11-28 12:50:52.316173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.874 [2024-11-28 12:50:52.316233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.874 [2024-11-28 12:50:52.316246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.874 [2024-11-28 12:50:52.316253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.874 [2024-11-28 12:50:52.316259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.874 [2024-11-28 12:50:52.316273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.874 qpair failed and we were unable to recover it. 
00:27:09.874 [2024-11-28 12:50:52.326202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.874 [2024-11-28 12:50:52.326258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.874 [2024-11-28 12:50:52.326271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.874 [2024-11-28 12:50:52.326278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.874 [2024-11-28 12:50:52.326284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:09.874 [2024-11-28 12:50:52.326298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.874 qpair failed and we were unable to recover it. 
00:27:09.874 [2024-11-28 12:50:52.336238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.874 [2024-11-28 12:50:52.336293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.874 [2024-11-28 12:50:52.336307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.874 [2024-11-28 12:50:52.336313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.874 [2024-11-28 12:50:52.336319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:09.874 [2024-11-28 12:50:52.336334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.874 qpair failed and we were unable to recover it.
00:27:09.874 [2024-11-28 12:50:52.346292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.874 [2024-11-28 12:50:52.346396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.874 [2024-11-28 12:50:52.346410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.874 [2024-11-28 12:50:52.346417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.874 [2024-11-28 12:50:52.346423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:09.874 [2024-11-28 12:50:52.346437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.874 qpair failed and we were unable to recover it.
00:27:09.874 [2024-11-28 12:50:52.356262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.874 [2024-11-28 12:50:52.356319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.874 [2024-11-28 12:50:52.356333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.874 [2024-11-28 12:50:52.356340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.874 [2024-11-28 12:50:52.356346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:09.874 [2024-11-28 12:50:52.356360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.874 qpair failed and we were unable to recover it.
00:27:09.874 [2024-11-28 12:50:52.366303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.874 [2024-11-28 12:50:52.366355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.874 [2024-11-28 12:50:52.366369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.874 [2024-11-28 12:50:52.366375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.874 [2024-11-28 12:50:52.366381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:09.874 [2024-11-28 12:50:52.366395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.874 qpair failed and we were unable to recover it.
00:27:09.874 [2024-11-28 12:50:52.376379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.874 [2024-11-28 12:50:52.376485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.874 [2024-11-28 12:50:52.376499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.874 [2024-11-28 12:50:52.376505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.874 [2024-11-28 12:50:52.376511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:09.875 [2024-11-28 12:50:52.376526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.875 qpair failed and we were unable to recover it.
00:27:09.875 [2024-11-28 12:50:52.386374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.875 [2024-11-28 12:50:52.386436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.875 [2024-11-28 12:50:52.386453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.875 [2024-11-28 12:50:52.386460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.875 [2024-11-28 12:50:52.386466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:09.875 [2024-11-28 12:50:52.386480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.875 qpair failed and we were unable to recover it.
00:27:10.132 [2024-11-28 12:50:52.396413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.132 [2024-11-28 12:50:52.396477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.132 [2024-11-28 12:50:52.396490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.132 [2024-11-28 12:50:52.396497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.132 [2024-11-28 12:50:52.396503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.132 [2024-11-28 12:50:52.396518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.132 qpair failed and we were unable to recover it.
00:27:10.132 [2024-11-28 12:50:52.406451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.132 [2024-11-28 12:50:52.406509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.132 [2024-11-28 12:50:52.406522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.132 [2024-11-28 12:50:52.406529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.132 [2024-11-28 12:50:52.406535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.132 [2024-11-28 12:50:52.406550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.132 qpair failed and we were unable to recover it.
00:27:10.132 [2024-11-28 12:50:52.416462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.132 [2024-11-28 12:50:52.416520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.132 [2024-11-28 12:50:52.416533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.132 [2024-11-28 12:50:52.416540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.132 [2024-11-28 12:50:52.416546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.132 [2024-11-28 12:50:52.416560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.132 qpair failed and we were unable to recover it.
00:27:10.132 [2024-11-28 12:50:52.426492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.132 [2024-11-28 12:50:52.426544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.132 [2024-11-28 12:50:52.426558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.132 [2024-11-28 12:50:52.426564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.132 [2024-11-28 12:50:52.426573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.132 [2024-11-28 12:50:52.426588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.132 qpair failed and we were unable to recover it.
00:27:10.132 [2024-11-28 12:50:52.436565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.132 [2024-11-28 12:50:52.436670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.132 [2024-11-28 12:50:52.436684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.132 [2024-11-28 12:50:52.436691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.132 [2024-11-28 12:50:52.436696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.132 [2024-11-28 12:50:52.436712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.132 qpair failed and we were unable to recover it.
00:27:10.132 [2024-11-28 12:50:52.446508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.132 [2024-11-28 12:50:52.446565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.132 [2024-11-28 12:50:52.446579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.132 [2024-11-28 12:50:52.446586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.132 [2024-11-28 12:50:52.446591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.132 [2024-11-28 12:50:52.446607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.132 qpair failed and we were unable to recover it.
00:27:10.132 [2024-11-28 12:50:52.456573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.132 [2024-11-28 12:50:52.456632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.132 [2024-11-28 12:50:52.456646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.132 [2024-11-28 12:50:52.456653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.132 [2024-11-28 12:50:52.456659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.132 [2024-11-28 12:50:52.456674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.132 qpair failed and we were unable to recover it.
00:27:10.132 [2024-11-28 12:50:52.466658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.132 [2024-11-28 12:50:52.466717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.132 [2024-11-28 12:50:52.466731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.132 [2024-11-28 12:50:52.466738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.132 [2024-11-28 12:50:52.466744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.132 [2024-11-28 12:50:52.466759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.132 qpair failed and we were unable to recover it.
00:27:10.132 [2024-11-28 12:50:52.476664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.132 [2024-11-28 12:50:52.476766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.132 [2024-11-28 12:50:52.476781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.132 [2024-11-28 12:50:52.476788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.132 [2024-11-28 12:50:52.476794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.132 [2024-11-28 12:50:52.476809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.132 qpair failed and we were unable to recover it.
00:27:10.132 [2024-11-28 12:50:52.486654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.132 [2024-11-28 12:50:52.486712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.132 [2024-11-28 12:50:52.486726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.132 [2024-11-28 12:50:52.486733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.132 [2024-11-28 12:50:52.486738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.132 [2024-11-28 12:50:52.486753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.132 qpair failed and we were unable to recover it.
00:27:10.132 [2024-11-28 12:50:52.496709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.132 [2024-11-28 12:50:52.496781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.132 [2024-11-28 12:50:52.496796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.132 [2024-11-28 12:50:52.496802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.132 [2024-11-28 12:50:52.496808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.132 [2024-11-28 12:50:52.496823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.132 qpair failed and we were unable to recover it.
00:27:10.132 [2024-11-28 12:50:52.506733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.132 [2024-11-28 12:50:52.506791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.132 [2024-11-28 12:50:52.506805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.132 [2024-11-28 12:50:52.506812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.132 [2024-11-28 12:50:52.506818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.133 [2024-11-28 12:50:52.506834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.133 qpair failed and we were unable to recover it.
00:27:10.133 [2024-11-28 12:50:52.516782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.133 [2024-11-28 12:50:52.516839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.133 [2024-11-28 12:50:52.516856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.133 [2024-11-28 12:50:52.516863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.133 [2024-11-28 12:50:52.516869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.133 [2024-11-28 12:50:52.516885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.133 qpair failed and we were unable to recover it.
00:27:10.133 [2024-11-28 12:50:52.526765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.133 [2024-11-28 12:50:52.526819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.133 [2024-11-28 12:50:52.526832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.133 [2024-11-28 12:50:52.526839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.133 [2024-11-28 12:50:52.526845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.133 [2024-11-28 12:50:52.526860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.133 qpair failed and we were unable to recover it.
00:27:10.133 [2024-11-28 12:50:52.536802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.133 [2024-11-28 12:50:52.536859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.133 [2024-11-28 12:50:52.536873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.133 [2024-11-28 12:50:52.536879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.133 [2024-11-28 12:50:52.536885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.133 [2024-11-28 12:50:52.536899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.133 qpair failed and we were unable to recover it.
00:27:10.133 [2024-11-28 12:50:52.546860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.133 [2024-11-28 12:50:52.546923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.133 [2024-11-28 12:50:52.546937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.133 [2024-11-28 12:50:52.546943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.133 [2024-11-28 12:50:52.546953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.133 [2024-11-28 12:50:52.546968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.133 qpair failed and we were unable to recover it.
00:27:10.133 [2024-11-28 12:50:52.556908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.133 [2024-11-28 12:50:52.556963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.133 [2024-11-28 12:50:52.556976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.133 [2024-11-28 12:50:52.556983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.133 [2024-11-28 12:50:52.556995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.133 [2024-11-28 12:50:52.557010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.133 qpair failed and we were unable to recover it.
00:27:10.133 [2024-11-28 12:50:52.566885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.133 [2024-11-28 12:50:52.566951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.133 [2024-11-28 12:50:52.566966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.133 [2024-11-28 12:50:52.566972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.133 [2024-11-28 12:50:52.566978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.133 [2024-11-28 12:50:52.566992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.133 qpair failed and we were unable to recover it.
00:27:10.133 [2024-11-28 12:50:52.576925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.133 [2024-11-28 12:50:52.576988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.133 [2024-11-28 12:50:52.577002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.133 [2024-11-28 12:50:52.577009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.133 [2024-11-28 12:50:52.577015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.133 [2024-11-28 12:50:52.577029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.133 qpair failed and we were unable to recover it.
00:27:10.133 [2024-11-28 12:50:52.587015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.133 [2024-11-28 12:50:52.587117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.133 [2024-11-28 12:50:52.587130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.133 [2024-11-28 12:50:52.587137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.133 [2024-11-28 12:50:52.587143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.133 [2024-11-28 12:50:52.587159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.133 qpair failed and we were unable to recover it.
00:27:10.133 [2024-11-28 12:50:52.596970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.133 [2024-11-28 12:50:52.597029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.133 [2024-11-28 12:50:52.597043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.133 [2024-11-28 12:50:52.597050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.133 [2024-11-28 12:50:52.597056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.133 [2024-11-28 12:50:52.597070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.133 qpair failed and we were unable to recover it.
00:27:10.133 [2024-11-28 12:50:52.607004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.133 [2024-11-28 12:50:52.607109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.133 [2024-11-28 12:50:52.607123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.133 [2024-11-28 12:50:52.607130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.133 [2024-11-28 12:50:52.607136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.133 [2024-11-28 12:50:52.607150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.133 qpair failed and we were unable to recover it.
00:27:10.133 [2024-11-28 12:50:52.617081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.133 [2024-11-28 12:50:52.617139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.133 [2024-11-28 12:50:52.617153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.133 [2024-11-28 12:50:52.617160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.133 [2024-11-28 12:50:52.617166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.133 [2024-11-28 12:50:52.617180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.133 qpair failed and we were unable to recover it.
00:27:10.133 [2024-11-28 12:50:52.627069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.133 [2024-11-28 12:50:52.627128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.133 [2024-11-28 12:50:52.627142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.133 [2024-11-28 12:50:52.627148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.133 [2024-11-28 12:50:52.627154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.133 [2024-11-28 12:50:52.627169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.133 qpair failed and we were unable to recover it.
00:27:10.133 [2024-11-28 12:50:52.637082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.133 [2024-11-28 12:50:52.637139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.133 [2024-11-28 12:50:52.637152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.133 [2024-11-28 12:50:52.637159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.133 [2024-11-28 12:50:52.637165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.133 [2024-11-28 12:50:52.637181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.134 qpair failed and we were unable to recover it.
00:27:10.134 [2024-11-28 12:50:52.647123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.134 [2024-11-28 12:50:52.647195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.134 [2024-11-28 12:50:52.647212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.134 [2024-11-28 12:50:52.647219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.134 [2024-11-28 12:50:52.647224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.134 [2024-11-28 12:50:52.647239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.134 qpair failed and we were unable to recover it.
00:27:10.392 [2024-11-28 12:50:52.657188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.392 [2024-11-28 12:50:52.657246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.392 [2024-11-28 12:50:52.657260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.392 [2024-11-28 12:50:52.657266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.392 [2024-11-28 12:50:52.657272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.392 [2024-11-28 12:50:52.657287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.392 qpair failed and we were unable to recover it.
00:27:10.392 [2024-11-28 12:50:52.667196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.392 [2024-11-28 12:50:52.667252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.392 [2024-11-28 12:50:52.667266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.392 [2024-11-28 12:50:52.667273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.392 [2024-11-28 12:50:52.667279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.392 [2024-11-28 12:50:52.667294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.392 qpair failed and we were unable to recover it.
00:27:10.392 [2024-11-28 12:50:52.677199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.392 [2024-11-28 12:50:52.677255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.392 [2024-11-28 12:50:52.677268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.392 [2024-11-28 12:50:52.677275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.392 [2024-11-28 12:50:52.677281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.392 [2024-11-28 12:50:52.677296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.392 qpair failed and we were unable to recover it.
00:27:10.392 [2024-11-28 12:50:52.687234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.392 [2024-11-28 12:50:52.687288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.392 [2024-11-28 12:50:52.687301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.392 [2024-11-28 12:50:52.687311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.392 [2024-11-28 12:50:52.687318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.392 [2024-11-28 12:50:52.687332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.392 qpair failed and we were unable to recover it. 
00:27:10.392 [2024-11-28 12:50:52.697284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.392 [2024-11-28 12:50:52.697362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.392 [2024-11-28 12:50:52.697376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.392 [2024-11-28 12:50:52.697383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.392 [2024-11-28 12:50:52.697389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.392 [2024-11-28 12:50:52.697404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.392 qpair failed and we were unable to recover it. 
00:27:10.392 [2024-11-28 12:50:52.707307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.392 [2024-11-28 12:50:52.707384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.392 [2024-11-28 12:50:52.707398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.392 [2024-11-28 12:50:52.707405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.392 [2024-11-28 12:50:52.707410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.392 [2024-11-28 12:50:52.707425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.392 qpair failed and we were unable to recover it. 
00:27:10.392 [2024-11-28 12:50:52.717323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.392 [2024-11-28 12:50:52.717381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.392 [2024-11-28 12:50:52.717393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.392 [2024-11-28 12:50:52.717400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.392 [2024-11-28 12:50:52.717406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.392 [2024-11-28 12:50:52.717421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.392 qpair failed and we were unable to recover it. 
00:27:10.392 [2024-11-28 12:50:52.727347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.392 [2024-11-28 12:50:52.727407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.392 [2024-11-28 12:50:52.727421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.392 [2024-11-28 12:50:52.727428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.392 [2024-11-28 12:50:52.727434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.392 [2024-11-28 12:50:52.727448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.392 qpair failed and we were unable to recover it. 
00:27:10.392 [2024-11-28 12:50:52.737375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.392 [2024-11-28 12:50:52.737434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.392 [2024-11-28 12:50:52.737448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.392 [2024-11-28 12:50:52.737454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.392 [2024-11-28 12:50:52.737460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.392 [2024-11-28 12:50:52.737475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.392 qpair failed and we were unable to recover it. 
00:27:10.392 [2024-11-28 12:50:52.747411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.392 [2024-11-28 12:50:52.747467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.392 [2024-11-28 12:50:52.747481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.392 [2024-11-28 12:50:52.747487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.392 [2024-11-28 12:50:52.747494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.392 [2024-11-28 12:50:52.747508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.392 qpair failed and we were unable to recover it. 
00:27:10.392 [2024-11-28 12:50:52.757462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.393 [2024-11-28 12:50:52.757524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.393 [2024-11-28 12:50:52.757538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.393 [2024-11-28 12:50:52.757544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.393 [2024-11-28 12:50:52.757550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.393 [2024-11-28 12:50:52.757566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.393 qpair failed and we were unable to recover it. 
00:27:10.393 [2024-11-28 12:50:52.767485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.393 [2024-11-28 12:50:52.767545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.393 [2024-11-28 12:50:52.767559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.393 [2024-11-28 12:50:52.767567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.393 [2024-11-28 12:50:52.767572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.393 [2024-11-28 12:50:52.767588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.393 qpair failed and we were unable to recover it. 
00:27:10.393 [2024-11-28 12:50:52.777488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.393 [2024-11-28 12:50:52.777582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.393 [2024-11-28 12:50:52.777596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.393 [2024-11-28 12:50:52.777603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.393 [2024-11-28 12:50:52.777608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.393 [2024-11-28 12:50:52.777623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.393 qpair failed and we were unable to recover it. 
00:27:10.393 [2024-11-28 12:50:52.787504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.393 [2024-11-28 12:50:52.787592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.393 [2024-11-28 12:50:52.787606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.393 [2024-11-28 12:50:52.787612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.393 [2024-11-28 12:50:52.787619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.393 [2024-11-28 12:50:52.787633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.393 qpair failed and we were unable to recover it. 
00:27:10.393 [2024-11-28 12:50:52.797558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.393 [2024-11-28 12:50:52.797616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.393 [2024-11-28 12:50:52.797630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.393 [2024-11-28 12:50:52.797637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.393 [2024-11-28 12:50:52.797643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.393 [2024-11-28 12:50:52.797658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.393 qpair failed and we were unable to recover it. 
00:27:10.393 [2024-11-28 12:50:52.807586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.393 [2024-11-28 12:50:52.807643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.393 [2024-11-28 12:50:52.807656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.393 [2024-11-28 12:50:52.807663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.393 [2024-11-28 12:50:52.807669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.393 [2024-11-28 12:50:52.807683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.393 qpair failed and we were unable to recover it. 
00:27:10.393 [2024-11-28 12:50:52.817622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.393 [2024-11-28 12:50:52.817704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.393 [2024-11-28 12:50:52.817718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.393 [2024-11-28 12:50:52.817729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.393 [2024-11-28 12:50:52.817735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.393 [2024-11-28 12:50:52.817749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.393 qpair failed and we were unable to recover it. 
00:27:10.393 [2024-11-28 12:50:52.827673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.393 [2024-11-28 12:50:52.827727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.393 [2024-11-28 12:50:52.827742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.393 [2024-11-28 12:50:52.827748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.393 [2024-11-28 12:50:52.827754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.393 [2024-11-28 12:50:52.827769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.393 qpair failed and we were unable to recover it. 
00:27:10.393 [2024-11-28 12:50:52.837674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.393 [2024-11-28 12:50:52.837731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.393 [2024-11-28 12:50:52.837745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.393 [2024-11-28 12:50:52.837751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.393 [2024-11-28 12:50:52.837757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.393 [2024-11-28 12:50:52.837772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.393 qpair failed and we were unable to recover it. 
00:27:10.393 [2024-11-28 12:50:52.847699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.393 [2024-11-28 12:50:52.847752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.393 [2024-11-28 12:50:52.847766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.393 [2024-11-28 12:50:52.847772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.393 [2024-11-28 12:50:52.847778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.393 [2024-11-28 12:50:52.847794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.393 qpair failed and we were unable to recover it. 
00:27:10.393 [2024-11-28 12:50:52.857805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.393 [2024-11-28 12:50:52.857895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.393 [2024-11-28 12:50:52.857908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.393 [2024-11-28 12:50:52.857915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.393 [2024-11-28 12:50:52.857920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.393 [2024-11-28 12:50:52.857938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.393 qpair failed and we were unable to recover it. 
00:27:10.393 [2024-11-28 12:50:52.867738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.393 [2024-11-28 12:50:52.867816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.393 [2024-11-28 12:50:52.867831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.393 [2024-11-28 12:50:52.867838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.393 [2024-11-28 12:50:52.867844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.393 [2024-11-28 12:50:52.867859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.393 qpair failed and we were unable to recover it. 
00:27:10.393 [2024-11-28 12:50:52.877873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.393 [2024-11-28 12:50:52.877957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.393 [2024-11-28 12:50:52.877973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.393 [2024-11-28 12:50:52.877980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.393 [2024-11-28 12:50:52.877986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.393 [2024-11-28 12:50:52.878001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.393 qpair failed and we were unable to recover it. 
00:27:10.393 [2024-11-28 12:50:52.887837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.393 [2024-11-28 12:50:52.887888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.393 [2024-11-28 12:50:52.887902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.393 [2024-11-28 12:50:52.887909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.393 [2024-11-28 12:50:52.887915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.393 [2024-11-28 12:50:52.887930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.393 qpair failed and we were unable to recover it. 
00:27:10.393 [2024-11-28 12:50:52.897799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.393 [2024-11-28 12:50:52.897857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.393 [2024-11-28 12:50:52.897871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.393 [2024-11-28 12:50:52.897879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.393 [2024-11-28 12:50:52.897885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.393 [2024-11-28 12:50:52.897900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.393 qpair failed and we were unable to recover it. 
00:27:10.649 [2024-11-28 12:50:52.907893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.649 [2024-11-28 12:50:52.907945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.649 [2024-11-28 12:50:52.907963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.649 [2024-11-28 12:50:52.907969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.650 [2024-11-28 12:50:52.907975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.650 [2024-11-28 12:50:52.907990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.650 qpair failed and we were unable to recover it. 
00:27:10.650 [2024-11-28 12:50:52.917917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.650 [2024-11-28 12:50:52.917977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.650 [2024-11-28 12:50:52.917992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.650 [2024-11-28 12:50:52.917999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.650 [2024-11-28 12:50:52.918005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.650 [2024-11-28 12:50:52.918019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.650 qpair failed and we were unable to recover it. 
00:27:10.650 [2024-11-28 12:50:52.927904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.650 [2024-11-28 12:50:52.927966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.650 [2024-11-28 12:50:52.927980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.650 [2024-11-28 12:50:52.927986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.650 [2024-11-28 12:50:52.927992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.650 [2024-11-28 12:50:52.928007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.650 qpair failed and we were unable to recover it. 
00:27:10.650 [2024-11-28 12:50:52.937960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.650 [2024-11-28 12:50:52.938040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.650 [2024-11-28 12:50:52.938055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.650 [2024-11-28 12:50:52.938061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.650 [2024-11-28 12:50:52.938069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.650 [2024-11-28 12:50:52.938084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.650 qpair failed and we were unable to recover it. 
00:27:10.650 [2024-11-28 12:50:52.948020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.650 [2024-11-28 12:50:52.948080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.650 [2024-11-28 12:50:52.948098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.650 [2024-11-28 12:50:52.948105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.650 [2024-11-28 12:50:52.948111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.650 [2024-11-28 12:50:52.948125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.650 qpair failed and we were unable to recover it. 
00:27:10.650 [2024-11-28 12:50:52.958043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.650 [2024-11-28 12:50:52.958099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.650 [2024-11-28 12:50:52.958113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.650 [2024-11-28 12:50:52.958120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.650 [2024-11-28 12:50:52.958125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.650 [2024-11-28 12:50:52.958140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.650 qpair failed and we were unable to recover it.
00:27:10.650 [2024-11-28 12:50:52.968001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.650 [2024-11-28 12:50:52.968058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.650 [2024-11-28 12:50:52.968072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.650 [2024-11-28 12:50:52.968079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.650 [2024-11-28 12:50:52.968085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.650 [2024-11-28 12:50:52.968100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.650 qpair failed and we were unable to recover it.
00:27:10.650 [2024-11-28 12:50:52.978125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.650 [2024-11-28 12:50:52.978187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.650 [2024-11-28 12:50:52.978201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.650 [2024-11-28 12:50:52.978207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.650 [2024-11-28 12:50:52.978213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.650 [2024-11-28 12:50:52.978229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.650 qpair failed and we were unable to recover it.
00:27:10.650 [2024-11-28 12:50:52.988136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.650 [2024-11-28 12:50:52.988192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.650 [2024-11-28 12:50:52.988206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.650 [2024-11-28 12:50:52.988213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.650 [2024-11-28 12:50:52.988223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.650 [2024-11-28 12:50:52.988237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.650 qpair failed and we were unable to recover it.
00:27:10.650 [2024-11-28 12:50:52.998092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.650 [2024-11-28 12:50:52.998182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.650 [2024-11-28 12:50:52.998196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.650 [2024-11-28 12:50:52.998203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.650 [2024-11-28 12:50:52.998209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.650 [2024-11-28 12:50:52.998224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.650 qpair failed and we were unable to recover it.
00:27:10.650 [2024-11-28 12:50:53.008190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.650 [2024-11-28 12:50:53.008250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.650 [2024-11-28 12:50:53.008264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.650 [2024-11-28 12:50:53.008271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.650 [2024-11-28 12:50:53.008277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.650 [2024-11-28 12:50:53.008292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.650 qpair failed and we were unable to recover it.
00:27:10.650 [2024-11-28 12:50:53.018219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.650 [2024-11-28 12:50:53.018278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.650 [2024-11-28 12:50:53.018292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.650 [2024-11-28 12:50:53.018299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.650 [2024-11-28 12:50:53.018305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.650 [2024-11-28 12:50:53.018320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.650 qpair failed and we were unable to recover it.
00:27:10.650 [2024-11-28 12:50:53.028210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.650 [2024-11-28 12:50:53.028277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.650 [2024-11-28 12:50:53.028292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.650 [2024-11-28 12:50:53.028298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.650 [2024-11-28 12:50:53.028304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.650 [2024-11-28 12:50:53.028319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.650 qpair failed and we were unable to recover it.
00:27:10.651 [2024-11-28 12:50:53.038244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.651 [2024-11-28 12:50:53.038305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.651 [2024-11-28 12:50:53.038319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.651 [2024-11-28 12:50:53.038326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.651 [2024-11-28 12:50:53.038332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.651 [2024-11-28 12:50:53.038346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.651 qpair failed and we were unable to recover it.
00:27:10.651 [2024-11-28 12:50:53.048331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.651 [2024-11-28 12:50:53.048382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.651 [2024-11-28 12:50:53.048397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.651 [2024-11-28 12:50:53.048403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.651 [2024-11-28 12:50:53.048409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.651 [2024-11-28 12:50:53.048424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.651 qpair failed and we were unable to recover it.
00:27:10.651 [2024-11-28 12:50:53.058299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.651 [2024-11-28 12:50:53.058402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.651 [2024-11-28 12:50:53.058416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.651 [2024-11-28 12:50:53.058423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.651 [2024-11-28 12:50:53.058429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.651 [2024-11-28 12:50:53.058443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.651 qpair failed and we were unable to recover it.
00:27:10.651 [2024-11-28 12:50:53.068357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.651 [2024-11-28 12:50:53.068413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.651 [2024-11-28 12:50:53.068427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.651 [2024-11-28 12:50:53.068434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.651 [2024-11-28 12:50:53.068440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.651 [2024-11-28 12:50:53.068455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.651 qpair failed and we were unable to recover it.
00:27:10.651 [2024-11-28 12:50:53.078380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.651 [2024-11-28 12:50:53.078437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.651 [2024-11-28 12:50:53.078454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.651 [2024-11-28 12:50:53.078461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.651 [2024-11-28 12:50:53.078466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.651 [2024-11-28 12:50:53.078481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.651 qpair failed and we were unable to recover it.
00:27:10.651 [2024-11-28 12:50:53.088415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.651 [2024-11-28 12:50:53.088470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.651 [2024-11-28 12:50:53.088484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.651 [2024-11-28 12:50:53.088491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.651 [2024-11-28 12:50:53.088497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.651 [2024-11-28 12:50:53.088512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.651 qpair failed and we were unable to recover it.
00:27:10.651 [2024-11-28 12:50:53.098389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.651 [2024-11-28 12:50:53.098464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.651 [2024-11-28 12:50:53.098478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.651 [2024-11-28 12:50:53.098485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.651 [2024-11-28 12:50:53.098491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.651 [2024-11-28 12:50:53.098506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.651 qpair failed and we were unable to recover it.
00:27:10.651 [2024-11-28 12:50:53.108422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.651 [2024-11-28 12:50:53.108478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.651 [2024-11-28 12:50:53.108491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.651 [2024-11-28 12:50:53.108498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.651 [2024-11-28 12:50:53.108504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.651 [2024-11-28 12:50:53.108518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.651 qpair failed and we were unable to recover it.
00:27:10.651 [2024-11-28 12:50:53.118544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.651 [2024-11-28 12:50:53.118605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.651 [2024-11-28 12:50:53.118619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.651 [2024-11-28 12:50:53.118626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.651 [2024-11-28 12:50:53.118635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.651 [2024-11-28 12:50:53.118650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.651 qpair failed and we were unable to recover it.
00:27:10.651 [2024-11-28 12:50:53.128540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.651 [2024-11-28 12:50:53.128601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.651 [2024-11-28 12:50:53.128616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.651 [2024-11-28 12:50:53.128623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.651 [2024-11-28 12:50:53.128629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.651 [2024-11-28 12:50:53.128644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.651 qpair failed and we were unable to recover it.
00:27:10.651 [2024-11-28 12:50:53.138531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.651 [2024-11-28 12:50:53.138587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.651 [2024-11-28 12:50:53.138601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.651 [2024-11-28 12:50:53.138607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.651 [2024-11-28 12:50:53.138613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.651 [2024-11-28 12:50:53.138628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.651 qpair failed and we were unable to recover it.
00:27:10.651 [2024-11-28 12:50:53.148576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.651 [2024-11-28 12:50:53.148632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.651 [2024-11-28 12:50:53.148646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.651 [2024-11-28 12:50:53.148652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.651 [2024-11-28 12:50:53.148658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.651 [2024-11-28 12:50:53.148673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.651 qpair failed and we were unable to recover it.
00:27:10.651 [2024-11-28 12:50:53.158597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.651 [2024-11-28 12:50:53.158651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.651 [2024-11-28 12:50:53.158665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.651 [2024-11-28 12:50:53.158672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.651 [2024-11-28 12:50:53.158678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.651 [2024-11-28 12:50:53.158693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.651 qpair failed and we were unable to recover it.
00:27:10.909 [2024-11-28 12:50:53.168638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.909 [2024-11-28 12:50:53.168696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.909 [2024-11-28 12:50:53.168710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.909 [2024-11-28 12:50:53.168717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.909 [2024-11-28 12:50:53.168723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.909 [2024-11-28 12:50:53.168738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.909 qpair failed and we were unable to recover it.
00:27:10.909 [2024-11-28 12:50:53.178668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.910 [2024-11-28 12:50:53.178728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.910 [2024-11-28 12:50:53.178741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.910 [2024-11-28 12:50:53.178748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.910 [2024-11-28 12:50:53.178755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.910 [2024-11-28 12:50:53.178769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.910 qpair failed and we were unable to recover it.
00:27:10.910 [2024-11-28 12:50:53.188694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.910 [2024-11-28 12:50:53.188780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.910 [2024-11-28 12:50:53.188795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.910 [2024-11-28 12:50:53.188802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.910 [2024-11-28 12:50:53.188808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.910 [2024-11-28 12:50:53.188823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.910 qpair failed and we were unable to recover it.
00:27:10.910 [2024-11-28 12:50:53.198715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.910 [2024-11-28 12:50:53.198771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.910 [2024-11-28 12:50:53.198785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.910 [2024-11-28 12:50:53.198792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.910 [2024-11-28 12:50:53.198798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.910 [2024-11-28 12:50:53.198813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.910 qpair failed and we were unable to recover it.
00:27:10.910 [2024-11-28 12:50:53.208746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.910 [2024-11-28 12:50:53.208800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.910 [2024-11-28 12:50:53.208816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.910 [2024-11-28 12:50:53.208823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.910 [2024-11-28 12:50:53.208829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.910 [2024-11-28 12:50:53.208844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.910 qpair failed and we were unable to recover it.
00:27:10.910 [2024-11-28 12:50:53.218786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.910 [2024-11-28 12:50:53.218847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.910 [2024-11-28 12:50:53.218860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.910 [2024-11-28 12:50:53.218867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.910 [2024-11-28 12:50:53.218873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.910 [2024-11-28 12:50:53.218888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.910 qpair failed and we were unable to recover it.
00:27:10.910 [2024-11-28 12:50:53.228840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.910 [2024-11-28 12:50:53.228928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.910 [2024-11-28 12:50:53.228941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.910 [2024-11-28 12:50:53.228951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.910 [2024-11-28 12:50:53.228957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.910 [2024-11-28 12:50:53.228972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.910 qpair failed and we were unable to recover it.
00:27:10.910 [2024-11-28 12:50:53.238830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.910 [2024-11-28 12:50:53.238893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.910 [2024-11-28 12:50:53.238906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.910 [2024-11-28 12:50:53.238913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.910 [2024-11-28 12:50:53.238919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.910 [2024-11-28 12:50:53.238933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.910 qpair failed and we were unable to recover it.
00:27:10.910 [2024-11-28 12:50:53.248878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.910 [2024-11-28 12:50:53.248960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.910 [2024-11-28 12:50:53.248975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.910 [2024-11-28 12:50:53.248987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.910 [2024-11-28 12:50:53.248993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.910 [2024-11-28 12:50:53.249008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.910 qpair failed and we were unable to recover it.
00:27:10.910 [2024-11-28 12:50:53.258901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.910 [2024-11-28 12:50:53.258963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.910 [2024-11-28 12:50:53.258977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.910 [2024-11-28 12:50:53.258984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.910 [2024-11-28 12:50:53.258989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.910 [2024-11-28 12:50:53.259005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.910 qpair failed and we were unable to recover it.
00:27:10.910 [2024-11-28 12:50:53.268932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.910 [2024-11-28 12:50:53.268995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.910 [2024-11-28 12:50:53.269010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.910 [2024-11-28 12:50:53.269017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.911 [2024-11-28 12:50:53.269023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.911 [2024-11-28 12:50:53.269038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.911 qpair failed and we were unable to recover it.
00:27:10.911 [2024-11-28 12:50:53.278950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.911 [2024-11-28 12:50:53.279003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.911 [2024-11-28 12:50:53.279017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.911 [2024-11-28 12:50:53.279024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.911 [2024-11-28 12:50:53.279029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.911 [2024-11-28 12:50:53.279044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.911 qpair failed and we were unable to recover it.
00:27:10.911 [2024-11-28 12:50:53.289020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.911 [2024-11-28 12:50:53.289080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.911 [2024-11-28 12:50:53.289094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.911 [2024-11-28 12:50:53.289100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.911 [2024-11-28 12:50:53.289106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.911 [2024-11-28 12:50:53.289124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.911 qpair failed and we were unable to recover it.
00:27:10.911 [2024-11-28 12:50:53.299023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.911 [2024-11-28 12:50:53.299082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.911 [2024-11-28 12:50:53.299096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.911 [2024-11-28 12:50:53.299103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.911 [2024-11-28 12:50:53.299108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:10.911 [2024-11-28 12:50:53.299123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.911 qpair failed and we were unable to recover it.
00:27:10.911 [2024-11-28 12:50:53.309038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.911 [2024-11-28 12:50:53.309099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.911 [2024-11-28 12:50:53.309112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.911 [2024-11-28 12:50:53.309119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.911 [2024-11-28 12:50:53.309125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.911 [2024-11-28 12:50:53.309139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.911 qpair failed and we were unable to recover it. 
00:27:10.911 [2024-11-28 12:50:53.319063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.911 [2024-11-28 12:50:53.319113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.911 [2024-11-28 12:50:53.319127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.911 [2024-11-28 12:50:53.319134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.911 [2024-11-28 12:50:53.319139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.911 [2024-11-28 12:50:53.319154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.911 qpair failed and we were unable to recover it. 
00:27:10.911 [2024-11-28 12:50:53.329101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.911 [2024-11-28 12:50:53.329161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.911 [2024-11-28 12:50:53.329175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.911 [2024-11-28 12:50:53.329181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.911 [2024-11-28 12:50:53.329187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.911 [2024-11-28 12:50:53.329202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.911 qpair failed and we were unable to recover it. 
00:27:10.911 [2024-11-28 12:50:53.339160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.911 [2024-11-28 12:50:53.339268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.911 [2024-11-28 12:50:53.339282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.911 [2024-11-28 12:50:53.339289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.911 [2024-11-28 12:50:53.339295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.911 [2024-11-28 12:50:53.339310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.911 qpair failed and we were unable to recover it. 
00:27:10.911 [2024-11-28 12:50:53.349189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.911 [2024-11-28 12:50:53.349272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.911 [2024-11-28 12:50:53.349286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.911 [2024-11-28 12:50:53.349293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.911 [2024-11-28 12:50:53.349299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.911 [2024-11-28 12:50:53.349313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.911 qpair failed and we were unable to recover it. 
00:27:10.911 [2024-11-28 12:50:53.359181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.911 [2024-11-28 12:50:53.359239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.911 [2024-11-28 12:50:53.359253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.911 [2024-11-28 12:50:53.359259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.911 [2024-11-28 12:50:53.359265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.911 [2024-11-28 12:50:53.359280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.911 qpair failed and we were unable to recover it. 
00:27:10.911 [2024-11-28 12:50:53.369217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.911 [2024-11-28 12:50:53.369275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.911 [2024-11-28 12:50:53.369289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.912 [2024-11-28 12:50:53.369295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.912 [2024-11-28 12:50:53.369301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.912 [2024-11-28 12:50:53.369316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.912 qpair failed and we were unable to recover it. 
00:27:10.912 [2024-11-28 12:50:53.379257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.912 [2024-11-28 12:50:53.379356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.912 [2024-11-28 12:50:53.379369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.912 [2024-11-28 12:50:53.379379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.912 [2024-11-28 12:50:53.379385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.912 [2024-11-28 12:50:53.379400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.912 qpair failed and we were unable to recover it. 
00:27:10.912 [2024-11-28 12:50:53.389195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.912 [2024-11-28 12:50:53.389252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.912 [2024-11-28 12:50:53.389266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.912 [2024-11-28 12:50:53.389273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.912 [2024-11-28 12:50:53.389278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.912 [2024-11-28 12:50:53.389293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.912 qpair failed and we were unable to recover it. 
00:27:10.912 [2024-11-28 12:50:53.399292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.912 [2024-11-28 12:50:53.399348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.912 [2024-11-28 12:50:53.399362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.912 [2024-11-28 12:50:53.399369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.912 [2024-11-28 12:50:53.399375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.912 [2024-11-28 12:50:53.399389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.912 qpair failed and we were unable to recover it. 
00:27:10.912 [2024-11-28 12:50:53.409338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.912 [2024-11-28 12:50:53.409397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.912 [2024-11-28 12:50:53.409410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.912 [2024-11-28 12:50:53.409417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.912 [2024-11-28 12:50:53.409423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.912 [2024-11-28 12:50:53.409439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.912 qpair failed and we were unable to recover it. 
00:27:10.912 [2024-11-28 12:50:53.419353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.912 [2024-11-28 12:50:53.419414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.912 [2024-11-28 12:50:53.419428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.912 [2024-11-28 12:50:53.419435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.912 [2024-11-28 12:50:53.419441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:10.912 [2024-11-28 12:50:53.419459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.912 qpair failed and we were unable to recover it. 
00:27:11.170 [2024-11-28 12:50:53.429389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.170 [2024-11-28 12:50:53.429448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.170 [2024-11-28 12:50:53.429462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.170 [2024-11-28 12:50:53.429469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.170 [2024-11-28 12:50:53.429474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.170 [2024-11-28 12:50:53.429489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.170 qpair failed and we were unable to recover it. 
00:27:11.170 [2024-11-28 12:50:53.439417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.170 [2024-11-28 12:50:53.439476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.170 [2024-11-28 12:50:53.439489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.170 [2024-11-28 12:50:53.439496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.170 [2024-11-28 12:50:53.439502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.170 [2024-11-28 12:50:53.439517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.170 qpair failed and we were unable to recover it. 
00:27:11.170 [2024-11-28 12:50:53.449447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.170 [2024-11-28 12:50:53.449504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.170 [2024-11-28 12:50:53.449518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.170 [2024-11-28 12:50:53.449525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.170 [2024-11-28 12:50:53.449531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.170 [2024-11-28 12:50:53.449545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.170 qpair failed and we were unable to recover it. 
00:27:11.170 [2024-11-28 12:50:53.459519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.170 [2024-11-28 12:50:53.459625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.170 [2024-11-28 12:50:53.459639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.170 [2024-11-28 12:50:53.459646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.170 [2024-11-28 12:50:53.459652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.170 [2024-11-28 12:50:53.459666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.170 qpair failed and we were unable to recover it. 
00:27:11.170 [2024-11-28 12:50:53.469502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.170 [2024-11-28 12:50:53.469560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.170 [2024-11-28 12:50:53.469575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.170 [2024-11-28 12:50:53.469582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.170 [2024-11-28 12:50:53.469587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.170 [2024-11-28 12:50:53.469602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.170 qpair failed and we were unable to recover it. 
00:27:11.170 [2024-11-28 12:50:53.479455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.170 [2024-11-28 12:50:53.479509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.170 [2024-11-28 12:50:53.479522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.170 [2024-11-28 12:50:53.479529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.170 [2024-11-28 12:50:53.479534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.170 [2024-11-28 12:50:53.479549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.170 qpair failed and we were unable to recover it. 
00:27:11.170 [2024-11-28 12:50:53.489553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.170 [2024-11-28 12:50:53.489607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.170 [2024-11-28 12:50:53.489620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.170 [2024-11-28 12:50:53.489627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.170 [2024-11-28 12:50:53.489633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.170 [2024-11-28 12:50:53.489647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.170 qpair failed and we were unable to recover it. 
00:27:11.170 [2024-11-28 12:50:53.499592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.170 [2024-11-28 12:50:53.499652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.170 [2024-11-28 12:50:53.499666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.170 [2024-11-28 12:50:53.499673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.170 [2024-11-28 12:50:53.499679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.170 [2024-11-28 12:50:53.499694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.170 qpair failed and we were unable to recover it. 
00:27:11.170 [2024-11-28 12:50:53.509646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.171 [2024-11-28 12:50:53.509708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.171 [2024-11-28 12:50:53.509733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.171 [2024-11-28 12:50:53.509740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.171 [2024-11-28 12:50:53.509746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.171 [2024-11-28 12:50:53.509761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.171 qpair failed and we were unable to recover it. 
00:27:11.171 [2024-11-28 12:50:53.519668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.171 [2024-11-28 12:50:53.519723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.171 [2024-11-28 12:50:53.519737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.171 [2024-11-28 12:50:53.519744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.171 [2024-11-28 12:50:53.519750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.171 [2024-11-28 12:50:53.519765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.171 qpair failed and we were unable to recover it. 
00:27:11.171 [2024-11-28 12:50:53.529711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.171 [2024-11-28 12:50:53.529808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.171 [2024-11-28 12:50:53.529822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.171 [2024-11-28 12:50:53.529828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.171 [2024-11-28 12:50:53.529834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.171 [2024-11-28 12:50:53.529849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.171 qpair failed and we were unable to recover it. 
00:27:11.171 [2024-11-28 12:50:53.539679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.171 [2024-11-28 12:50:53.539739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.171 [2024-11-28 12:50:53.539753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.171 [2024-11-28 12:50:53.539761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.171 [2024-11-28 12:50:53.539769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.171 [2024-11-28 12:50:53.539786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.171 qpair failed and we were unable to recover it. 
00:27:11.171 [2024-11-28 12:50:53.549729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.171 [2024-11-28 12:50:53.549809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.171 [2024-11-28 12:50:53.549823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.171 [2024-11-28 12:50:53.549830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.171 [2024-11-28 12:50:53.549839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.171 [2024-11-28 12:50:53.549854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.171 qpair failed and we were unable to recover it. 
00:27:11.171 [2024-11-28 12:50:53.559757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.171 [2024-11-28 12:50:53.559814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.171 [2024-11-28 12:50:53.559828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.171 [2024-11-28 12:50:53.559835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.171 [2024-11-28 12:50:53.559841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.171 [2024-11-28 12:50:53.559856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.171 qpair failed and we were unable to recover it. 
00:27:11.171 [2024-11-28 12:50:53.569777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.171 [2024-11-28 12:50:53.569835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.171 [2024-11-28 12:50:53.569850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.171 [2024-11-28 12:50:53.569857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.171 [2024-11-28 12:50:53.569863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.171 [2024-11-28 12:50:53.569878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.171 qpair failed and we were unable to recover it. 
00:27:11.171 [2024-11-28 12:50:53.579847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.171 [2024-11-28 12:50:53.579959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.171 [2024-11-28 12:50:53.579974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.171 [2024-11-28 12:50:53.579981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.171 [2024-11-28 12:50:53.579987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.171 [2024-11-28 12:50:53.580003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.171 qpair failed and we were unable to recover it.
00:27:11.171 [2024-11-28 12:50:53.589896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.171 [2024-11-28 12:50:53.589998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.171 [2024-11-28 12:50:53.590012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.171 [2024-11-28 12:50:53.590019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.171 [2024-11-28 12:50:53.590025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.171 [2024-11-28 12:50:53.590040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.171 qpair failed and we were unable to recover it.
00:27:11.171 [2024-11-28 12:50:53.599883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.171 [2024-11-28 12:50:53.599940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.171 [2024-11-28 12:50:53.599958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.171 [2024-11-28 12:50:53.599965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.171 [2024-11-28 12:50:53.599971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.171 [2024-11-28 12:50:53.599986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.171 qpair failed and we were unable to recover it.
00:27:11.171 [2024-11-28 12:50:53.609868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.171 [2024-11-28 12:50:53.609922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.171 [2024-11-28 12:50:53.609936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.171 [2024-11-28 12:50:53.609943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.171 [2024-11-28 12:50:53.609953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.171 [2024-11-28 12:50:53.609969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.171 qpair failed and we were unable to recover it.
00:27:11.171 [2024-11-28 12:50:53.619986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.171 [2024-11-28 12:50:53.620048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.171 [2024-11-28 12:50:53.620062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.171 [2024-11-28 12:50:53.620069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.171 [2024-11-28 12:50:53.620075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.171 [2024-11-28 12:50:53.620090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.171 qpair failed and we were unable to recover it.
00:27:11.171 [2024-11-28 12:50:53.629951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.171 [2024-11-28 12:50:53.630006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.171 [2024-11-28 12:50:53.630020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.171 [2024-11-28 12:50:53.630027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.171 [2024-11-28 12:50:53.630033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.171 [2024-11-28 12:50:53.630048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.171 qpair failed and we were unable to recover it.
00:27:11.171 [2024-11-28 12:50:53.639990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.172 [2024-11-28 12:50:53.640047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.172 [2024-11-28 12:50:53.640064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.172 [2024-11-28 12:50:53.640071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.172 [2024-11-28 12:50:53.640077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.172 [2024-11-28 12:50:53.640091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.172 qpair failed and we were unable to recover it.
00:27:11.172 [2024-11-28 12:50:53.650016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.172 [2024-11-28 12:50:53.650108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.172 [2024-11-28 12:50:53.650122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.172 [2024-11-28 12:50:53.650129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.172 [2024-11-28 12:50:53.650135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.172 [2024-11-28 12:50:53.650149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.172 qpair failed and we were unable to recover it.
00:27:11.172 [2024-11-28 12:50:53.660019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.172 [2024-11-28 12:50:53.660075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.172 [2024-11-28 12:50:53.660089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.172 [2024-11-28 12:50:53.660095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.172 [2024-11-28 12:50:53.660101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.172 [2024-11-28 12:50:53.660116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.172 qpair failed and we were unable to recover it.
00:27:11.172 [2024-11-28 12:50:53.670072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.172 [2024-11-28 12:50:53.670132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.172 [2024-11-28 12:50:53.670146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.172 [2024-11-28 12:50:53.670154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.172 [2024-11-28 12:50:53.670161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.172 [2024-11-28 12:50:53.670176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.172 qpair failed and we were unable to recover it.
00:27:11.172 [2024-11-28 12:50:53.680138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.172 [2024-11-28 12:50:53.680194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.172 [2024-11-28 12:50:53.680208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.172 [2024-11-28 12:50:53.680215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.172 [2024-11-28 12:50:53.680224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.172 [2024-11-28 12:50:53.680239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.172 qpair failed and we were unable to recover it.
00:27:11.429 [2024-11-28 12:50:53.690176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.429 [2024-11-28 12:50:53.690227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.429 [2024-11-28 12:50:53.690241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.429 [2024-11-28 12:50:53.690247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.429 [2024-11-28 12:50:53.690253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.429 [2024-11-28 12:50:53.690268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.429 qpair failed and we were unable to recover it.
00:27:11.429 [2024-11-28 12:50:53.700182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.429 [2024-11-28 12:50:53.700239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.429 [2024-11-28 12:50:53.700252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.429 [2024-11-28 12:50:53.700259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.429 [2024-11-28 12:50:53.700265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.429 [2024-11-28 12:50:53.700280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.429 qpair failed and we were unable to recover it.
00:27:11.429 [2024-11-28 12:50:53.710197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.429 [2024-11-28 12:50:53.710301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.429 [2024-11-28 12:50:53.710314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.429 [2024-11-28 12:50:53.710321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.429 [2024-11-28 12:50:53.710327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.429 [2024-11-28 12:50:53.710342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.429 qpair failed and we were unable to recover it.
00:27:11.429 [2024-11-28 12:50:53.720221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.429 [2024-11-28 12:50:53.720324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.429 [2024-11-28 12:50:53.720337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.429 [2024-11-28 12:50:53.720343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.429 [2024-11-28 12:50:53.720350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.429 [2024-11-28 12:50:53.720364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.429 qpair failed and we were unable to recover it.
00:27:11.429 [2024-11-28 12:50:53.730335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.429 [2024-11-28 12:50:53.730391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.429 [2024-11-28 12:50:53.730406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.429 [2024-11-28 12:50:53.730413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.429 [2024-11-28 12:50:53.730419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.429 [2024-11-28 12:50:53.730434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.429 qpair failed and we were unable to recover it.
00:27:11.429 [2024-11-28 12:50:53.740276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.429 [2024-11-28 12:50:53.740336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.429 [2024-11-28 12:50:53.740350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.429 [2024-11-28 12:50:53.740356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.429 [2024-11-28 12:50:53.740362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.429 [2024-11-28 12:50:53.740376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.429 qpair failed and we were unable to recover it.
00:27:11.429 [2024-11-28 12:50:53.750310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.429 [2024-11-28 12:50:53.750371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.429 [2024-11-28 12:50:53.750385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.429 [2024-11-28 12:50:53.750393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.429 [2024-11-28 12:50:53.750399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.429 [2024-11-28 12:50:53.750413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.429 qpair failed and we were unable to recover it.
00:27:11.429 [2024-11-28 12:50:53.760368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.429 [2024-11-28 12:50:53.760426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.429 [2024-11-28 12:50:53.760440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.429 [2024-11-28 12:50:53.760447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.429 [2024-11-28 12:50:53.760453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.429 [2024-11-28 12:50:53.760468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.429 qpair failed and we were unable to recover it.
00:27:11.429 [2024-11-28 12:50:53.770329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.429 [2024-11-28 12:50:53.770386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.429 [2024-11-28 12:50:53.770404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.429 [2024-11-28 12:50:53.770411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.429 [2024-11-28 12:50:53.770417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.430 [2024-11-28 12:50:53.770432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.430 qpair failed and we were unable to recover it.
00:27:11.430 [2024-11-28 12:50:53.780400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.430 [2024-11-28 12:50:53.780458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.430 [2024-11-28 12:50:53.780472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.430 [2024-11-28 12:50:53.780479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.430 [2024-11-28 12:50:53.780486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.430 [2024-11-28 12:50:53.780500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.430 qpair failed and we were unable to recover it.
00:27:11.430 [2024-11-28 12:50:53.790451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.430 [2024-11-28 12:50:53.790537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.430 [2024-11-28 12:50:53.790551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.430 [2024-11-28 12:50:53.790558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.430 [2024-11-28 12:50:53.790564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.430 [2024-11-28 12:50:53.790578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.430 qpair failed and we were unable to recover it.
00:27:11.430 [2024-11-28 12:50:53.800464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.430 [2024-11-28 12:50:53.800519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.430 [2024-11-28 12:50:53.800533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.430 [2024-11-28 12:50:53.800539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.430 [2024-11-28 12:50:53.800545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.430 [2024-11-28 12:50:53.800560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.430 qpair failed and we were unable to recover it.
00:27:11.430 [2024-11-28 12:50:53.810456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.430 [2024-11-28 12:50:53.810511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.430 [2024-11-28 12:50:53.810525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.430 [2024-11-28 12:50:53.810535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.430 [2024-11-28 12:50:53.810540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.430 [2024-11-28 12:50:53.810555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.430 qpair failed and we were unable to recover it.
00:27:11.430 [2024-11-28 12:50:53.820527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.430 [2024-11-28 12:50:53.820592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.430 [2024-11-28 12:50:53.820606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.430 [2024-11-28 12:50:53.820613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.430 [2024-11-28 12:50:53.820619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.430 [2024-11-28 12:50:53.820633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.430 qpair failed and we were unable to recover it.
00:27:11.430 [2024-11-28 12:50:53.830545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.430 [2024-11-28 12:50:53.830604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.430 [2024-11-28 12:50:53.830618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.430 [2024-11-28 12:50:53.830624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.430 [2024-11-28 12:50:53.830630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.430 [2024-11-28 12:50:53.830644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.430 qpair failed and we were unable to recover it.
00:27:11.430 [2024-11-28 12:50:53.840567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.430 [2024-11-28 12:50:53.840664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.430 [2024-11-28 12:50:53.840677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.430 [2024-11-28 12:50:53.840684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.430 [2024-11-28 12:50:53.840690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.430 [2024-11-28 12:50:53.840705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.430 qpair failed and we were unable to recover it.
00:27:11.430 [2024-11-28 12:50:53.850601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.430 [2024-11-28 12:50:53.850655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.430 [2024-11-28 12:50:53.850669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.430 [2024-11-28 12:50:53.850676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.430 [2024-11-28 12:50:53.850682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.430 [2024-11-28 12:50:53.850700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.430 qpair failed and we were unable to recover it.
00:27:11.430 [2024-11-28 12:50:53.860641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.430 [2024-11-28 12:50:53.860701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.430 [2024-11-28 12:50:53.860714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.430 [2024-11-28 12:50:53.860721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.430 [2024-11-28 12:50:53.860727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.430 [2024-11-28 12:50:53.860742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.430 qpair failed and we were unable to recover it.
00:27:11.430 [2024-11-28 12:50:53.870670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.430 [2024-11-28 12:50:53.870733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.430 [2024-11-28 12:50:53.870748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.430 [2024-11-28 12:50:53.870755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.430 [2024-11-28 12:50:53.870760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.430 [2024-11-28 12:50:53.870776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.430 qpair failed and we were unable to recover it.
00:27:11.430 [2024-11-28 12:50:53.880681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.430 [2024-11-28 12:50:53.880738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.430 [2024-11-28 12:50:53.880752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.430 [2024-11-28 12:50:53.880759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.430 [2024-11-28 12:50:53.880764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.430 [2024-11-28 12:50:53.880779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.430 qpair failed and we were unable to recover it.
00:27:11.430 [2024-11-28 12:50:53.890707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.430 [2024-11-28 12:50:53.890759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.430 [2024-11-28 12:50:53.890773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.430 [2024-11-28 12:50:53.890780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.430 [2024-11-28 12:50:53.890786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.430 [2024-11-28 12:50:53.890801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.430 qpair failed and we were unable to recover it.
00:27:11.430 [2024-11-28 12:50:53.900742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.430 [2024-11-28 12:50:53.900800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.430 [2024-11-28 12:50:53.900815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.430 [2024-11-28 12:50:53.900822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.430 [2024-11-28 12:50:53.900828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.431 [2024-11-28 12:50:53.900843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.431 qpair failed and we were unable to recover it.
00:27:11.431 [2024-11-28 12:50:53.910768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.431 [2024-11-28 12:50:53.910830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.431 [2024-11-28 12:50:53.910845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.431 [2024-11-28 12:50:53.910851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.431 [2024-11-28 12:50:53.910858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.431 [2024-11-28 12:50:53.910872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.431 qpair failed and we were unable to recover it.
00:27:11.431 [2024-11-28 12:50:53.920797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.431 [2024-11-28 12:50:53.920855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.431 [2024-11-28 12:50:53.920869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.431 [2024-11-28 12:50:53.920876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.431 [2024-11-28 12:50:53.920882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90
00:27:11.431 [2024-11-28 12:50:53.920897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.431 qpair failed and we were unable to recover it.
00:27:11.431 [2024-11-28 12:50:53.930823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.431 [2024-11-28 12:50:53.930875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.431 [2024-11-28 12:50:53.930890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.431 [2024-11-28 12:50:53.930897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.431 [2024-11-28 12:50:53.930903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.431 [2024-11-28 12:50:53.930917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.431 qpair failed and we were unable to recover it. 
00:27:11.431 [2024-11-28 12:50:53.940901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.431 [2024-11-28 12:50:53.940979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.431 [2024-11-28 12:50:53.940993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.431 [2024-11-28 12:50:53.941004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.431 [2024-11-28 12:50:53.941009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.431 [2024-11-28 12:50:53.941025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.431 qpair failed and we were unable to recover it. 
00:27:11.689 [2024-11-28 12:50:53.950879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.689 [2024-11-28 12:50:53.950936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.689 [2024-11-28 12:50:53.950954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.689 [2024-11-28 12:50:53.950961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.689 [2024-11-28 12:50:53.950967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.689 [2024-11-28 12:50:53.950982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.689 qpair failed and we were unable to recover it. 
00:27:11.689 [2024-11-28 12:50:53.960858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.689 [2024-11-28 12:50:53.960915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.690 [2024-11-28 12:50:53.960929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.690 [2024-11-28 12:50:53.960936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.690 [2024-11-28 12:50:53.960943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.690 [2024-11-28 12:50:53.960963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.690 qpair failed and we were unable to recover it. 
00:27:11.690 [2024-11-28 12:50:53.970963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.690 [2024-11-28 12:50:53.971019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.690 [2024-11-28 12:50:53.971034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.690 [2024-11-28 12:50:53.971041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.690 [2024-11-28 12:50:53.971047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.690 [2024-11-28 12:50:53.971063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.690 qpair failed and we were unable to recover it. 
00:27:11.690 [2024-11-28 12:50:53.980995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.690 [2024-11-28 12:50:53.981060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.690 [2024-11-28 12:50:53.981074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.690 [2024-11-28 12:50:53.981081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.690 [2024-11-28 12:50:53.981087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.690 [2024-11-28 12:50:53.981107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.690 qpair failed and we were unable to recover it. 
00:27:11.690 [2024-11-28 12:50:53.991019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.690 [2024-11-28 12:50:53.991077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.690 [2024-11-28 12:50:53.991091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.690 [2024-11-28 12:50:53.991098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.690 [2024-11-28 12:50:53.991104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.690 [2024-11-28 12:50:53.991119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.690 qpair failed and we were unable to recover it. 
00:27:11.690 [2024-11-28 12:50:54.001015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.690 [2024-11-28 12:50:54.001069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.690 [2024-11-28 12:50:54.001083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.690 [2024-11-28 12:50:54.001090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.690 [2024-11-28 12:50:54.001097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.690 [2024-11-28 12:50:54.001112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.690 qpair failed and we were unable to recover it. 
00:27:11.690 [2024-11-28 12:50:54.011087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.690 [2024-11-28 12:50:54.011146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.690 [2024-11-28 12:50:54.011160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.690 [2024-11-28 12:50:54.011167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.690 [2024-11-28 12:50:54.011172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.690 [2024-11-28 12:50:54.011187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.690 qpair failed and we were unable to recover it. 
00:27:11.690 [2024-11-28 12:50:54.021095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.690 [2024-11-28 12:50:54.021171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.690 [2024-11-28 12:50:54.021185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.690 [2024-11-28 12:50:54.021192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.690 [2024-11-28 12:50:54.021198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.690 [2024-11-28 12:50:54.021212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.690 qpair failed and we were unable to recover it. 
00:27:11.690 [2024-11-28 12:50:54.031201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.690 [2024-11-28 12:50:54.031280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.690 [2024-11-28 12:50:54.031293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.690 [2024-11-28 12:50:54.031300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.690 [2024-11-28 12:50:54.031306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.690 [2024-11-28 12:50:54.031320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.690 qpair failed and we were unable to recover it. 
00:27:11.690 [2024-11-28 12:50:54.041155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.690 [2024-11-28 12:50:54.041214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.690 [2024-11-28 12:50:54.041227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.690 [2024-11-28 12:50:54.041234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.690 [2024-11-28 12:50:54.041240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.690 [2024-11-28 12:50:54.041254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.690 qpair failed and we were unable to recover it. 
00:27:11.690 [2024-11-28 12:50:54.051232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.690 [2024-11-28 12:50:54.051288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.690 [2024-11-28 12:50:54.051302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.690 [2024-11-28 12:50:54.051309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.690 [2024-11-28 12:50:54.051314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.690 [2024-11-28 12:50:54.051329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.690 qpair failed and we were unable to recover it. 
00:27:11.690 [2024-11-28 12:50:54.061233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.690 [2024-11-28 12:50:54.061291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.690 [2024-11-28 12:50:54.061304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.690 [2024-11-28 12:50:54.061311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.690 [2024-11-28 12:50:54.061317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.690 [2024-11-28 12:50:54.061332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.691 qpair failed and we were unable to recover it. 
00:27:11.691 [2024-11-28 12:50:54.071249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.691 [2024-11-28 12:50:54.071307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.691 [2024-11-28 12:50:54.071323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.691 [2024-11-28 12:50:54.071331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.691 [2024-11-28 12:50:54.071337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.691 [2024-11-28 12:50:54.071352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.691 qpair failed and we were unable to recover it. 
00:27:11.691 [2024-11-28 12:50:54.081311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.691 [2024-11-28 12:50:54.081368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.691 [2024-11-28 12:50:54.081381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.691 [2024-11-28 12:50:54.081388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.691 [2024-11-28 12:50:54.081394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.691 [2024-11-28 12:50:54.081409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.691 qpair failed and we were unable to recover it. 
00:27:11.691 [2024-11-28 12:50:54.091303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.691 [2024-11-28 12:50:54.091367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.691 [2024-11-28 12:50:54.091381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.691 [2024-11-28 12:50:54.091387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.691 [2024-11-28 12:50:54.091393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.691 [2024-11-28 12:50:54.091408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.691 qpair failed and we were unable to recover it. 
00:27:11.691 [2024-11-28 12:50:54.101342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.691 [2024-11-28 12:50:54.101399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.691 [2024-11-28 12:50:54.101413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.691 [2024-11-28 12:50:54.101420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.691 [2024-11-28 12:50:54.101426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.691 [2024-11-28 12:50:54.101440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.691 qpair failed and we were unable to recover it. 
00:27:11.691 [2024-11-28 12:50:54.111373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.691 [2024-11-28 12:50:54.111454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.691 [2024-11-28 12:50:54.111468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.691 [2024-11-28 12:50:54.111475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.691 [2024-11-28 12:50:54.111484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.691 [2024-11-28 12:50:54.111499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.691 qpair failed and we were unable to recover it. 
00:27:11.691 [2024-11-28 12:50:54.121404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.691 [2024-11-28 12:50:54.121486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.691 [2024-11-28 12:50:54.121500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.691 [2024-11-28 12:50:54.121507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.691 [2024-11-28 12:50:54.121513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.691 [2024-11-28 12:50:54.121528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.691 qpair failed and we were unable to recover it. 
00:27:11.691 [2024-11-28 12:50:54.131467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.691 [2024-11-28 12:50:54.131519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.691 [2024-11-28 12:50:54.131533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.691 [2024-11-28 12:50:54.131540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.691 [2024-11-28 12:50:54.131546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.691 [2024-11-28 12:50:54.131560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.691 qpair failed and we were unable to recover it. 
00:27:11.691 [2024-11-28 12:50:54.141456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.691 [2024-11-28 12:50:54.141512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.691 [2024-11-28 12:50:54.141526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.691 [2024-11-28 12:50:54.141533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.691 [2024-11-28 12:50:54.141539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.691 [2024-11-28 12:50:54.141554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.691 qpair failed and we were unable to recover it. 
00:27:11.691 [2024-11-28 12:50:54.151404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.691 [2024-11-28 12:50:54.151463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.691 [2024-11-28 12:50:54.151477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.691 [2024-11-28 12:50:54.151484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.691 [2024-11-28 12:50:54.151490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.691 [2024-11-28 12:50:54.151505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.691 qpair failed and we were unable to recover it. 
00:27:11.691 [2024-11-28 12:50:54.161497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.691 [2024-11-28 12:50:54.161550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.691 [2024-11-28 12:50:54.161565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.691 [2024-11-28 12:50:54.161571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.691 [2024-11-28 12:50:54.161577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.691 [2024-11-28 12:50:54.161592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.691 qpair failed and we were unable to recover it. 
00:27:11.691 [2024-11-28 12:50:54.171538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.691 [2024-11-28 12:50:54.171593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.691 [2024-11-28 12:50:54.171608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.691 [2024-11-28 12:50:54.171615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.691 [2024-11-28 12:50:54.171621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.692 [2024-11-28 12:50:54.171636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.692 qpair failed and we were unable to recover it. 
00:27:11.692 [2024-11-28 12:50:54.181490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.692 [2024-11-28 12:50:54.181551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.692 [2024-11-28 12:50:54.181564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.692 [2024-11-28 12:50:54.181571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.692 [2024-11-28 12:50:54.181577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.692 [2024-11-28 12:50:54.181592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.692 qpair failed and we were unable to recover it. 
00:27:11.692 [2024-11-28 12:50:54.191519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.692 [2024-11-28 12:50:54.191574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.692 [2024-11-28 12:50:54.191589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.692 [2024-11-28 12:50:54.191596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.692 [2024-11-28 12:50:54.191603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.692 [2024-11-28 12:50:54.191618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.692 qpair failed and we were unable to recover it. 
00:27:11.692 [2024-11-28 12:50:54.201552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.692 [2024-11-28 12:50:54.201611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.692 [2024-11-28 12:50:54.201630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.692 [2024-11-28 12:50:54.201637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.692 [2024-11-28 12:50:54.201643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.692 [2024-11-28 12:50:54.201658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.692 qpair failed and we were unable to recover it. 
00:27:11.988 [2024-11-28 12:50:54.211703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.988 [2024-11-28 12:50:54.211782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.988 [2024-11-28 12:50:54.211801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.988 [2024-11-28 12:50:54.211808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.988 [2024-11-28 12:50:54.211814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.988 [2024-11-28 12:50:54.211831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.988 qpair failed and we were unable to recover it. 
00:27:11.988 [2024-11-28 12:50:54.221633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.988 [2024-11-28 12:50:54.221703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.988 [2024-11-28 12:50:54.221717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.988 [2024-11-28 12:50:54.221724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.988 [2024-11-28 12:50:54.221730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.988 [2024-11-28 12:50:54.221746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.988 qpair failed and we were unable to recover it. 
00:27:11.988 [2024-11-28 12:50:54.231685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.988 [2024-11-28 12:50:54.231778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.988 [2024-11-28 12:50:54.231793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.988 [2024-11-28 12:50:54.231800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.988 [2024-11-28 12:50:54.231805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.988 [2024-11-28 12:50:54.231821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.988 qpair failed and we were unable to recover it. 
00:27:11.988 [2024-11-28 12:50:54.241763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.988 [2024-11-28 12:50:54.241818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.988 [2024-11-28 12:50:54.241833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.988 [2024-11-28 12:50:54.241840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.988 [2024-11-28 12:50:54.241849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.988 [2024-11-28 12:50:54.241864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.988 qpair failed and we were unable to recover it. 
00:27:11.988 [2024-11-28 12:50:54.251752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.989 [2024-11-28 12:50:54.251809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.989 [2024-11-28 12:50:54.251824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.989 [2024-11-28 12:50:54.251831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.989 [2024-11-28 12:50:54.251837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.989 [2024-11-28 12:50:54.251852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.989 qpair failed and we were unable to recover it. 
00:27:11.989 [2024-11-28 12:50:54.261768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.989 [2024-11-28 12:50:54.261832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.989 [2024-11-28 12:50:54.261846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.989 [2024-11-28 12:50:54.261854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.989 [2024-11-28 12:50:54.261860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.989 [2024-11-28 12:50:54.261875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.989 qpair failed and we were unable to recover it. 
00:27:11.989 [2024-11-28 12:50:54.271769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.989 [2024-11-28 12:50:54.271846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.989 [2024-11-28 12:50:54.271860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.989 [2024-11-28 12:50:54.271867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.989 [2024-11-28 12:50:54.271872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.989 [2024-11-28 12:50:54.271888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.989 qpair failed and we were unable to recover it. 
00:27:11.989 [2024-11-28 12:50:54.281814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.989 [2024-11-28 12:50:54.281887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.989 [2024-11-28 12:50:54.281901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.989 [2024-11-28 12:50:54.281908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.989 [2024-11-28 12:50:54.281913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.989 [2024-11-28 12:50:54.281928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.989 qpair failed and we were unable to recover it. 
00:27:11.989 [2024-11-28 12:50:54.291864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.989 [2024-11-28 12:50:54.291926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.989 [2024-11-28 12:50:54.291940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.989 [2024-11-28 12:50:54.291951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.989 [2024-11-28 12:50:54.291957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c64000b90 00:27:11.989 [2024-11-28 12:50:54.291973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.989 qpair failed and we were unable to recover it. 
00:27:11.989 [2024-11-28 12:50:54.301861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.989 [2024-11-28 12:50:54.301929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.989 [2024-11-28 12:50:54.301956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.989 [2024-11-28 12:50:54.301966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.989 [2024-11-28 12:50:54.301973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c5c000b90 00:27:11.989 [2024-11-28 12:50:54.301992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.989 qpair failed and we were unable to recover it. 
00:27:11.989 [2024-11-28 12:50:54.311853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.989 [2024-11-28 12:50:54.311916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.989 [2024-11-28 12:50:54.311931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.989 [2024-11-28 12:50:54.311938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.989 [2024-11-28 12:50:54.311945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c5c000b90 00:27:11.989 [2024-11-28 12:50:54.311966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:11.989 qpair failed and we were unable to recover it. 
00:27:11.989 [2024-11-28 12:50:54.321969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.989 [2024-11-28 12:50:54.322045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.989 [2024-11-28 12:50:54.322071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.989 [2024-11-28 12:50:54.322083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.989 [2024-11-28 12:50:54.322092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c58000b90 00:27:11.989 [2024-11-28 12:50:54.322116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.989 qpair failed and we were unable to recover it. 
00:27:11.989 [2024-11-28 12:50:54.331911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.989 [2024-11-28 12:50:54.331970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.989 [2024-11-28 12:50:54.331989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.989 [2024-11-28 12:50:54.331996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.989 [2024-11-28 12:50:54.332002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8c58000b90 00:27:11.989 [2024-11-28 12:50:54.332018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:11.989 qpair failed and we were unable to recover it. 00:27:11.989 [2024-11-28 12:50:54.332136] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:27:11.989 A controller has encountered a failure and is being reset. 00:27:11.989 Controller properly reset. 00:27:11.989 Initializing NVMe Controllers 00:27:11.989 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:11.989 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:11.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:11.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:11.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:11.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:11.989 Initialization complete. Launching workers. 
00:27:11.989 Starting thread on core 1 00:27:11.989 Starting thread on core 2 00:27:11.989 Starting thread on core 3 00:27:11.989 Starting thread on core 0 00:27:11.990 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:11.990 00:27:11.990 real 0m11.308s 00:27:11.990 user 0m21.790s 00:27:11.990 sys 0m4.621s 00:27:11.990 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:11.990 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:11.990 ************************************ 00:27:11.990 END TEST nvmf_target_disconnect_tc2 00:27:11.990 ************************************ 00:27:11.990 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:11.990 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:11.990 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:11.990 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:11.990 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:11.990 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:11.990 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:11.990 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:11.990 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:11.990 rmmod nvme_tcp 00:27:11.990 rmmod nvme_fabrics 00:27:11.990 rmmod nvme_keyring 00:27:11.990 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:27:11.990 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:11.990 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:11.990 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2682327 ']' 00:27:11.990 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2682327 00:27:11.990 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2682327 ']' 00:27:11.990 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2682327 00:27:11.990 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:27:11.990 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:11.990 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2682327 00:27:12.288 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:27:12.289 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:27:12.289 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2682327' 00:27:12.289 killing process with pid 2682327 00:27:12.289 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2682327 00:27:12.289 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2682327 00:27:12.289 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:12.289 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:12.289 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:12.289 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:12.289 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:12.289 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:12.289 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:12.289 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:12.289 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:12.289 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.289 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:12.289 12:50:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.847 12:50:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:14.847 00:27:14.847 real 0m19.528s 00:27:14.847 user 0m48.848s 00:27:14.847 sys 0m9.138s 00:27:14.847 12:50:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:14.847 12:50:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:14.847 ************************************ 00:27:14.847 END TEST nvmf_target_disconnect 00:27:14.847 ************************************ 00:27:14.847 12:50:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:14.847 00:27:14.847 real 5m40.601s 00:27:14.847 user 10m22.594s 00:27:14.847 sys 1m51.798s 00:27:14.847 12:50:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:14.847 12:50:56 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.847 ************************************ 00:27:14.847 END TEST nvmf_host 00:27:14.847 ************************************ 00:27:14.847 12:50:56 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:14.847 12:50:56 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:14.847 12:50:56 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:14.847 12:50:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:14.847 12:50:56 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:14.847 12:50:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:14.847 ************************************ 00:27:14.847 START TEST nvmf_target_core_interrupt_mode 00:27:14.847 ************************************ 00:27:14.847 12:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:14.847 * Looking for test storage... 
00:27:14.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:14.847 12:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:14.847 12:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:27:14.847 12:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:14.847 12:50:57 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:14.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.847 --rc 
genhtml_branch_coverage=1 00:27:14.847 --rc genhtml_function_coverage=1 00:27:14.847 --rc genhtml_legend=1 00:27:14.847 --rc geninfo_all_blocks=1 00:27:14.847 --rc geninfo_unexecuted_blocks=1 00:27:14.847 00:27:14.847 ' 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:14.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.847 --rc genhtml_branch_coverage=1 00:27:14.847 --rc genhtml_function_coverage=1 00:27:14.847 --rc genhtml_legend=1 00:27:14.847 --rc geninfo_all_blocks=1 00:27:14.847 --rc geninfo_unexecuted_blocks=1 00:27:14.847 00:27:14.847 ' 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:14.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.847 --rc genhtml_branch_coverage=1 00:27:14.847 --rc genhtml_function_coverage=1 00:27:14.847 --rc genhtml_legend=1 00:27:14.847 --rc geninfo_all_blocks=1 00:27:14.847 --rc geninfo_unexecuted_blocks=1 00:27:14.847 00:27:14.847 ' 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:14.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.847 --rc genhtml_branch_coverage=1 00:27:14.847 --rc genhtml_function_coverage=1 00:27:14.847 --rc genhtml_legend=1 00:27:14.847 --rc geninfo_all_blocks=1 00:27:14.847 --rc geninfo_unexecuted_blocks=1 00:27:14.847 00:27:14.847 ' 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:14.847 
12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.847 12:50:57 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.847 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:14.848 
12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:14.848 ************************************ 00:27:14.848 START TEST nvmf_abort 00:27:14.848 ************************************ 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:14.848 * Looking for test storage... 
00:27:14.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:14.848 12:50:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:14.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.848 --rc genhtml_branch_coverage=1 00:27:14.848 --rc genhtml_function_coverage=1 00:27:14.848 --rc genhtml_legend=1 00:27:14.848 --rc geninfo_all_blocks=1 00:27:14.848 --rc geninfo_unexecuted_blocks=1 00:27:14.848 00:27:14.848 ' 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:14.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.848 --rc genhtml_branch_coverage=1 00:27:14.848 --rc genhtml_function_coverage=1 00:27:14.848 --rc genhtml_legend=1 00:27:14.848 --rc geninfo_all_blocks=1 00:27:14.848 --rc geninfo_unexecuted_blocks=1 00:27:14.848 00:27:14.848 ' 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:14.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.848 --rc genhtml_branch_coverage=1 00:27:14.848 --rc genhtml_function_coverage=1 00:27:14.848 --rc genhtml_legend=1 00:27:14.848 --rc geninfo_all_blocks=1 00:27:14.848 --rc geninfo_unexecuted_blocks=1 00:27:14.848 00:27:14.848 ' 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:14.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.848 --rc genhtml_branch_coverage=1 00:27:14.848 --rc genhtml_function_coverage=1 00:27:14.848 --rc genhtml_legend=1 00:27:14.848 --rc geninfo_all_blocks=1 00:27:14.848 --rc geninfo_unexecuted_blocks=1 00:27:14.848 00:27:14.848 ' 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:14.848 12:50:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.848 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:14.849 12:50:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:14.849 12:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:20.115 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:27:20.115 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:20.115 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:20.115 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:20.115 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:20.115 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:20.115 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:20.115 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:20.115 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:20.116 12:51:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:20.116 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:20.116 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:20.116 
12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:20.116 Found net devices under 0000:86:00.0: cvl_0_0 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:20.116 Found net devices under 0000:86:00.1: cvl_0_1 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:20.116 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.117 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.117 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:20.117 12:51:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:20.117 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:20.117 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:20.117 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:20.117 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:20.117 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:20.117 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.117 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:20.117 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:20.117 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:20.117 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:20.117 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:20.117 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:20.117 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:20.117 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:20.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:20.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:27:20.375 00:27:20.375 --- 10.0.0.2 ping statistics --- 00:27:20.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.375 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:20.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:20.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:27:20.375 00:27:20.375 --- 10.0.0.1 ping statistics --- 00:27:20.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.375 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2686991 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2686991 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2686991 ']' 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:20.375 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:20.375 [2024-11-28 12:51:02.764894] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:20.375 [2024-11-28 12:51:02.765854] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:27:20.375 [2024-11-28 12:51:02.765891] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:20.375 [2024-11-28 12:51:02.831471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:20.375 [2024-11-28 12:51:02.873633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:20.375 [2024-11-28 12:51:02.873675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:20.375 [2024-11-28 12:51:02.873683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:20.375 [2024-11-28 12:51:02.873689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:20.375 [2024-11-28 12:51:02.873694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:20.375 [2024-11-28 12:51:02.875159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:20.375 [2024-11-28 12:51:02.875271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:20.375 [2024-11-28 12:51:02.875273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.633 [2024-11-28 12:51:02.943156] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:20.633 [2024-11-28 12:51:02.943177] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:20.633 [2024-11-28 12:51:02.943384] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:27:20.633 [2024-11-28 12:51:02.943457] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:20.633 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:20.633 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:27:20.633 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:20.633 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:20.633 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:20.633 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:20.633 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:20.633 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.633 12:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:20.633 [2024-11-28 12:51:03.004037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.633 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.633 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:20.633 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.633 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:27:20.633 Malloc0 00:27:20.633 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.633 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:20.633 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.633 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:20.633 Delay0 00:27:20.633 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.633 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:20.633 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.633 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:20.633 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.633 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:20.633 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.633 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:20.633 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.634 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:27:20.634 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.634 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:20.634 [2024-11-28 12:51:03.071888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:20.634 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.634 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:20.634 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.634 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:20.634 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.634 12:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:20.892 [2024-11-28 12:51:03.188834] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:22.789 Initializing NVMe Controllers 00:27:22.789 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:22.789 controller IO queue size 128 less than required 00:27:22.789 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:22.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:22.789 Initialization complete. Launching workers. 
00:27:22.789 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 36720 00:27:22.789 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36781, failed to submit 66 00:27:22.789 success 36720, unsuccessful 61, failed 0 00:27:22.789 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:22.789 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.789 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.789 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.789 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:22.789 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:22.789 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:22.789 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:22.789 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:22.789 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:22.789 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:22.789 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:22.789 rmmod nvme_tcp 00:27:22.789 rmmod nvme_fabrics 00:27:22.789 rmmod nvme_keyring 00:27:22.789 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:22.789 12:51:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:22.789 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:22.789 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2686991 ']' 00:27:22.789 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2686991 00:27:22.789 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2686991 ']' 00:27:22.789 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2686991 00:27:22.789 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:27:22.789 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:22.789 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2686991 00:27:23.046 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:23.046 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:23.046 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2686991' 00:27:23.046 killing process with pid 2686991 00:27:23.046 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2686991 00:27:23.046 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2686991 00:27:23.046 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:23.046 12:51:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:23.046 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:23.046 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:23.046 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:23.046 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:23.046 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:23.046 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:23.046 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:23.046 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.046 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:23.046 12:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:25.580 00:27:25.580 real 0m10.460s 00:27:25.580 user 0m9.898s 00:27:25.580 sys 0m5.213s 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:25.580 ************************************ 00:27:25.580 END TEST nvmf_abort 00:27:25.580 ************************************ 00:27:25.580 12:51:07 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:25.580 ************************************ 00:27:25.580 START TEST nvmf_ns_hotplug_stress 00:27:25.580 ************************************ 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:25.580 * Looking for test storage... 
00:27:25.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:25.580 12:51:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:25.580 12:51:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:25.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.580 --rc genhtml_branch_coverage=1 00:27:25.580 --rc genhtml_function_coverage=1 00:27:25.580 --rc genhtml_legend=1 00:27:25.580 --rc geninfo_all_blocks=1 00:27:25.580 --rc geninfo_unexecuted_blocks=1 00:27:25.580 00:27:25.580 ' 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:25.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.580 --rc genhtml_branch_coverage=1 00:27:25.580 --rc genhtml_function_coverage=1 00:27:25.580 --rc genhtml_legend=1 00:27:25.580 --rc geninfo_all_blocks=1 00:27:25.580 --rc geninfo_unexecuted_blocks=1 00:27:25.580 00:27:25.580 ' 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:25.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.580 --rc genhtml_branch_coverage=1 00:27:25.580 --rc genhtml_function_coverage=1 00:27:25.580 --rc genhtml_legend=1 00:27:25.580 --rc geninfo_all_blocks=1 00:27:25.580 --rc geninfo_unexecuted_blocks=1 00:27:25.580 00:27:25.580 ' 00:27:25.580 12:51:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:25.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.580 --rc genhtml_branch_coverage=1 00:27:25.580 --rc genhtml_function_coverage=1 00:27:25.580 --rc genhtml_legend=1 00:27:25.580 --rc geninfo_all_blocks=1 00:27:25.580 --rc geninfo_unexecuted_blocks=1 00:27:25.580 00:27:25.580 ' 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:25.580 12:51:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:25.580 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.581 
12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:25.581 12:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:30.845 
12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:30.845 12:51:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:30.845 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:30.845 12:51:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:30.845 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.845 
12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:30.845 Found net devices under 0000:86:00.0: cvl_0_0 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:30.845 Found net devices under 0000:86:00.1: cvl_0_1 00:27:30.845 
12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:30.845 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:30.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:30.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:27:30.846 00:27:30.846 --- 10.0.0.2 ping statistics --- 00:27:30.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.846 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:30.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:30.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:27:30.846 00:27:30.846 --- 10.0.0.1 ping statistics --- 00:27:30.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.846 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:30.846 12:51:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2691361 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2691361 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2691361 ']' 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:30.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:30.846 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:30.846 [2024-11-28 12:51:13.324597] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:30.846 [2024-11-28 12:51:13.325515] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:27:30.846 [2024-11-28 12:51:13.325547] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:31.104 [2024-11-28 12:51:13.390908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:31.104 [2024-11-28 12:51:13.433256] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:31.104 [2024-11-28 12:51:13.433292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:31.104 [2024-11-28 12:51:13.433300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:31.104 [2024-11-28 12:51:13.433306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:31.104 [2024-11-28 12:51:13.433314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:31.104 [2024-11-28 12:51:13.434653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:31.104 [2024-11-28 12:51:13.434672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:31.104 [2024-11-28 12:51:13.434674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.104 [2024-11-28 12:51:13.502800] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:31.104 [2024-11-28 12:51:13.502820] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:31.104 [2024-11-28 12:51:13.503029] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:31.104 [2024-11-28 12:51:13.503101] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:31.104 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:31.104 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:27:31.104 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:31.104 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:31.104 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:31.104 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:31.104 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:27:31.104 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:31.362 [2024-11-28 12:51:13.735247] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.362 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:31.620 12:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:31.620 [2024-11-28 12:51:14.123816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.878 12:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:31.878 12:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:32.135 Malloc0 00:27:32.135 12:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:32.392 Delay0 00:27:32.392 12:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:32.656 12:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:32.656 NULL1 00:27:32.656 12:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:27:32.914 12:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2691627 00:27:32.914 12:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:32.914 12:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:32.914 12:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:33.171 12:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:33.171 12:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:33.171 12:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:33.429 true 00:27:33.429 12:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:33.429 12:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:33.687 12:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:33.945 12:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:33.945 12:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:33.945 true 00:27:34.202 12:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:34.202 12:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:35.133 Read completed with error (sct=0, sc=11) 00:27:35.133 12:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:35.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:35.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:35.133 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:35.403 12:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:35.403 12:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:35.659 true 00:27:35.659 12:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:35.659 12:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:35.659 12:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:35.916 12:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:35.916 12:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:36.174 true 00:27:36.174 12:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:36.174 12:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:37.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:37.562 12:51:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:37.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:37.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:37.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:37.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:37.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:37.562 12:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:37.562 12:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:37.820 true 00:27:37.820 12:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:37.820 12:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:38.754 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:38.754 12:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:38.754 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:38.754 12:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:38.754 12:51:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:39.012 true 00:27:39.012 12:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:39.012 12:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:39.270 12:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:39.270 12:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:39.270 12:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:39.528 true 00:27:39.528 12:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:39.528 12:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:40.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:40.902 12:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:40.902 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:27:40.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:40.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:40.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:40.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:40.902 12:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:40.902 12:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:41.161 true 00:27:41.161 12:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:41.161 12:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:42.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:42.095 12:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:42.095 12:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:27:42.095 12:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:42.353 true 00:27:42.353 12:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:42.354 12:51:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:42.612 12:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:42.870 12:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:42.870 12:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:42.870 true 00:27:42.870 12:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:42.870 12:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:43.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:43.127 12:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:43.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:43.128 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:43.128 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:43.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:43.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:43.386 
12:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:43.386 12:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:43.644 true 00:27:43.644 12:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:43.644 12:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:44.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.578 12:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:44.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.578 12:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:44.578 12:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:44.835 true 00:27:44.835 12:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:44.835 12:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:45.093 12:51:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:45.093 12:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:27:45.093 12:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:45.351 true 00:27:45.351 12:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:45.351 12:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:46.725 12:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:46.725 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:46.725 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:46.725 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:46.725 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:46.725 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:46.725 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:46.725 12:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:46.725 12:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:46.983 true 00:27:46.983 12:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:46.983 12:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:47.918 12:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:47.918 12:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:27:47.918 12:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:27:48.176 true 00:27:48.176 12:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:48.176 12:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:48.434 12:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:48.692 12:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:27:48.692 12:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:27:48.692 true 00:27:48.692 12:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:48.692 12:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.066 12:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:50.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.067 12:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:27:50.067 12:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:27:50.325 true 00:27:50.325 12:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:50.325 12:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:51.258 12:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:51.258 12:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:27:51.258 12:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:27:51.516 true 00:27:51.516 12:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:51.516 12:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:51.774 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:52.032 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:27:52.032 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:27:52.032 true 00:27:52.032 12:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:52.032 12:51:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:53.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.406 12:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:53.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.406 12:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:27:53.406 12:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:27:53.665 true 00:27:53.665 12:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:53.665 12:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:54.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.597 12:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:54.597 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:27:54.597 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:27:54.855 true 00:27:54.855 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:54.855 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:55.114 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.114 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:27:55.114 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:27:55.372 true 00:27:55.372 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:55.372 12:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:56.746 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:27:56.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:56.746 12:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:56.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:56.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:56.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:56.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:56.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:56.746 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:27:56.746 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:27:57.004 true 00:27:57.004 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:57.004 12:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:57.938 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:57.938 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:27:57.938 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:27:58.197 true 00:27:58.197 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:58.197 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:58.456 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:58.456 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:27:58.456 12:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:27:58.715 true 00:27:58.715 12:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:27:58.715 12:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:59.649 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.649 12:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.649 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.907 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:27:59.907 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.907 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.907 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.907 12:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:27:59.907 12:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:00.165 true 00:28:00.165 12:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:28:00.165 12:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:01.099 12:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:01.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:01.099 12:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:01.099 12:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:01.358 true 00:28:01.358 12:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:28:01.358 
12:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.616 12:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:01.873 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:28:01.873 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:28:01.873 true 00:28:01.873 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627 00:28:01.873 12:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.247 12:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.247 Initializing NVMe Controllers 00:28:03.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:03.247 Controller IO queue size 128, less than required. 
00:28:03.247 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:03.247 Controller IO queue size 128, less than required.
00:28:03.247 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:03.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:03.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:03.247 Initialization complete. Launching workers.
00:28:03.247 ========================================================
00:28:03.247 Latency(us)
00:28:03.247 Device Information : IOPS MiB/s Average min max
00:28:03.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2268.90 1.11 37355.98 1061.00 1012761.77
00:28:03.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16992.23 8.30 7513.83 1586.98 380848.26
00:28:03.247 ========================================================
00:28:03.247 Total : 19261.13 9.40 11029.14 1061.00 1012761.77
00:28:03.247
00:28:03.247 12:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:28:03.247 12:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:28:03.504 true
00:28:03.504 12:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2691627
00:28:03.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2691627) - No such process
00:28:03.504 12:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2691627
00:28:03.504 12:51:45
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.762 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:03.762 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:28:03.762 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:03.762 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:03.762 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:03.762 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:04.018 null0 00:28:04.018 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:04.018 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:04.018 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:04.275 null1 00:28:04.275 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:04.275 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:04.275 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:04.531 null2 00:28:04.531 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:04.531 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:04.531 12:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:04.531 null3 00:28:04.531 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:04.531 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:04.531 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:04.788 null4 00:28:04.788 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:04.788 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:04.788 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:05.046 null5 00:28:05.046 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
00:28:05.046 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:05.046 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:05.303 null6 00:28:05.303 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:05.303 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:05.303 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:05.303 null7 00:28:05.303 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:05.303 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:05.303 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:05.303 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:05.303 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:05.303 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:05.303 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:05.303 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:05.303 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:05.303 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:05.303 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.303 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:05.303 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:05.303 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2696961 2696962 2696964 2696965 2696967 2696969 2696970 2696972 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:05.304 12:51:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.304 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:05.562 12:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:05.562 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:05.562 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:05.562 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:05.562 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:05.562 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:05.562 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:05.562 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:05.819 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.819 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.819 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:05.819 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.819 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.819 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:05.819 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.819 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.819 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:05.819 12:51:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.819 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.819 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:05.819 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.819 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.819 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:05.819 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.819 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.819 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:05.819 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.820 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.820 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:05.820 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:05.820 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:05.820 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:06.077 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:06.077 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:06.077 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:06.077 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:06.077 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:06.077 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:06.077 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:06.077 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:06.334 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:06.334 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:06.334 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:06.334 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:06.334 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:06.334 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:06.334 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:06.335 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:06.335 12:51:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:06.335 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:06.335 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:06.335 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:06.335 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:06.335 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:06.335 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:06.335 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:06.335 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:06.335 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:06.335 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:06.335 12:51:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:06.335 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:06.335 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:06.335 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:06.335 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:06.335 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:06.335 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:06.335 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:06.592 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:06.592 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:06.592 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:06.592 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:06.592 12:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 
null4 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:06.592 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:06.850 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:06.850 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:06.850 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:06.850 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:28:06.850 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:06.850 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:06.850 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:06.850 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.108 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:07.366 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:07.366 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:07.366 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:07.366 12:51:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:07.366 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:07.366 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:07.366 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:07.366 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.624 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.624 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.624 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:07.624 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.624 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:28:07.624 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.624 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:07.624 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.624 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.624 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:07.624 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.624 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:07.624 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.624 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.624 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:07.624 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.624 12:51:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.624 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:07.624 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.624 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.624 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:07.624 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.624 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.624 12:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:07.624 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:07.624 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:07.624 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.624 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:07.624 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:07.624 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:07.624 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 
null6 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:07.882 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:08.140 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:08.140 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.140 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:08.140 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:08.140 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:08.140 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:08.140 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:08.140 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:08.397 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.397 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.397 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:08.397 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.397 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.397 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:08.397 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.397 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.397 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:08.397 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.397 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.397 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:08.397 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.397 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.397 12:51:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:08.397 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.397 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.397 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:08.398 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.398 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.398 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:08.398 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.398 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.398 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:08.655 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:08.655 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:08.655 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.655 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:08.655 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:08.655 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:08.655 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:08.655 12:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:08.655 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.655 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.655 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:08.655 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.655 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.655 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:08.655 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.655 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.655 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:08.655 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.655 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.655 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:08.913 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:28:08.913 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.913 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:08.913 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.914 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.914 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:08.914 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.914 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.914 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:08.914 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:08.914 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:08.914 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:08.914 12:51:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:08.914 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:08.914 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.914 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:08.914 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:08.914 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:08.914 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:08.914 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:09.172 12:51:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.172 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.172 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:09.172 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.172 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.172 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:09.172 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.172 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.172 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:09.172 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.172 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.172 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:09.172 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.172 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.172 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:09.172 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.173 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.173 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:09.173 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.173 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.173 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:09.173 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.173 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.173 12:51:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:09.431 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:09.431 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:09.431 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:09.431 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:09.431 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.431 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:09.431 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:09.431 12:51:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:09.690 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.690 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.690 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.690 12:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.690 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.690 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.690 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.690 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.690 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.690 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.690 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.690 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.690 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:28:09.690 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.690 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:09.690 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:09.690 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:09.690 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:09.691 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:09.691 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:09.691 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:09.691 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:09.691 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:09.691 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:09.691 rmmod nvme_tcp 00:28:09.691 rmmod nvme_fabrics 00:28:09.691 rmmod nvme_keyring 00:28:09.691 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:09.691 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:09.691 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:09.691 12:51:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2691361 ']' 00:28:09.691 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2691361 00:28:09.691 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2691361 ']' 00:28:09.691 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2691361 00:28:09.691 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:28:09.691 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:09.691 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2691361 00:28:09.691 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:09.691 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:09.691 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2691361' 00:28:09.691 killing process with pid 2691361 00:28:09.691 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2691361 00:28:09.691 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2691361 00:28:09.949 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:09.949 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p 
]] 00:28:09.949 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:09.949 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:09.949 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:09.949 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:09.949 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:09.949 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:09.949 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:09.949 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.949 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.949 12:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:12.484 00:28:12.484 real 0m46.728s 00:28:12.484 user 2m57.823s 00:28:12.484 sys 0m19.910s 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:12.484 ************************************ 00:28:12.484 END TEST nvmf_ns_hotplug_stress 00:28:12.484 
************************************ 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:12.484 ************************************ 00:28:12.484 START TEST nvmf_delete_subsystem 00:28:12.484 ************************************ 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:12.484 * Looking for test storage... 
00:28:12.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:12.484 12:51:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:12.484 12:51:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:12.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.484 --rc genhtml_branch_coverage=1 00:28:12.484 --rc genhtml_function_coverage=1 00:28:12.484 --rc genhtml_legend=1 00:28:12.484 --rc geninfo_all_blocks=1 00:28:12.484 --rc geninfo_unexecuted_blocks=1 00:28:12.484 00:28:12.484 ' 00:28:12.484 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:12.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.484 --rc genhtml_branch_coverage=1 00:28:12.484 --rc genhtml_function_coverage=1 00:28:12.484 --rc genhtml_legend=1 00:28:12.484 --rc geninfo_all_blocks=1 00:28:12.484 --rc geninfo_unexecuted_blocks=1 00:28:12.485 00:28:12.485 ' 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:12.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.485 --rc genhtml_branch_coverage=1 00:28:12.485 --rc genhtml_function_coverage=1 00:28:12.485 --rc genhtml_legend=1 00:28:12.485 --rc geninfo_all_blocks=1 00:28:12.485 --rc geninfo_unexecuted_blocks=1 00:28:12.485 00:28:12.485 ' 00:28:12.485 12:51:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:12.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.485 --rc genhtml_branch_coverage=1 00:28:12.485 --rc genhtml_function_coverage=1 00:28:12.485 --rc genhtml_legend=1 00:28:12.485 --rc geninfo_all_blocks=1 00:28:12.485 --rc geninfo_unexecuted_blocks=1 00:28:12.485 00:28:12.485 ' 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:12.485 12:51:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.485 
12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:12.485 12:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:12.485 12:51:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:17.766 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.1 (0x8086 - 0x159b)' 00:28:17.766 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:17.766 12:51:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:17.766 Found net devices under 0000:86:00.0: cvl_0_0 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:17.766 Found net devices under 0000:86:00.1: cvl_0_1 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:17.766 12:51:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:17.766 12:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.766 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:17.766 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:28:17.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:17.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:28:17.767 00:28:17.767 --- 10.0.0.2 ping statistics --- 00:28:17.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.767 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:17.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:17.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:28:17.767 00:28:17.767 --- 10.0.0.1 ping statistics --- 00:28:17.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.767 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2701245 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2701245 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2701245 ']' 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:17.767 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:18.024 [2024-11-28 12:52:00.323082] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:18.025 [2024-11-28 12:52:00.324048] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:28:18.025 [2024-11-28 12:52:00.324083] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:18.025 [2024-11-28 12:52:00.389941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:18.025 [2024-11-28 12:52:00.433488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:18.025 [2024-11-28 12:52:00.433526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:18.025 [2024-11-28 12:52:00.433534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:18.025 [2024-11-28 12:52:00.433541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:18.025 [2024-11-28 12:52:00.433546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:18.025 [2024-11-28 12:52:00.434677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:18.025 [2024-11-28 12:52:00.434682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.025 [2024-11-28 12:52:00.504794] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:28:18.025 [2024-11-28 12:52:00.504969] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:18.025 [2024-11-28 12:52:00.505073] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:18.025 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:18.025 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:28:18.025 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:18.025 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:18.025 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:18.282 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:18.282 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:18.282 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.282 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:18.282 [2024-11-28 12:52:00.571156] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:18.282 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.282 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:18.282 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.282 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:18.282 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.282 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:18.282 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.282 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:18.282 [2024-11-28 12:52:00.587320] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:18.282 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.282 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:18.282 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.282 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:18.282 NULL1 00:28:18.282 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.283 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:18.283 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.283 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:18.283 Delay0 00:28:18.283 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.283 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:18.283 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.283 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:18.283 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.283 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2701352 00:28:18.283 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:18.283 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:18.283 [2024-11-28 12:52:00.672523] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:28:20.178 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:20.178 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.178 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 starting I/O failed: -6 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 Write completed with error (sct=0, sc=8) 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 starting I/O failed: -6 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 starting I/O failed: -6 00:28:20.436 Write completed with error (sct=0, sc=8) 00:28:20.436 Write completed with error (sct=0, sc=8) 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 starting I/O failed: -6 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 Write completed with error (sct=0, sc=8) 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 starting I/O failed: -6 00:28:20.436 Write completed with error (sct=0, sc=8) 00:28:20.436 Write completed with error (sct=0, sc=8) 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 starting I/O failed: -6 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 Write completed with error (sct=0, 
sc=8) 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 starting I/O failed: -6 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 starting I/O failed: -6 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 starting I/O failed: -6 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 Write completed with error (sct=0, sc=8) 00:28:20.436 Read completed with error (sct=0, sc=8) 00:28:20.436 Write completed with error (sct=0, sc=8) 00:28:20.437 starting I/O failed: -6 00:28:20.437 [2024-11-28 12:52:02.798631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff46400d020 is same with the state(6) to be set 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write 
completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 starting I/O failed: -6 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 starting I/O failed: -6 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 starting I/O failed: -6 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error 
(sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 starting I/O failed: -6 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 starting I/O failed: -6 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 starting I/O failed: -6 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 starting I/O failed: -6 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read 
completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 starting I/O failed: -6 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 starting I/O failed: -6 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 starting I/O failed: -6 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error 
(sct=0, sc=8) 00:28:20.437 starting I/O failed: -6 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 starting I/O failed: -6 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 starting I/O failed: -6 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Write completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 starting I/O failed: -6 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 Read completed with error (sct=0, sc=8) 00:28:20.437 starting I/O failed: -6 00:28:21.371 [2024-11-28 12:52:03.767074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a29b0 is same with the state(6) to be set 00:28:21.371 Write completed with error (sct=0, sc=8) 00:28:21.371 Read completed 
with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Write completed with error (sct=0, sc=8) 00:28:21.371 Write completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Write completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Write completed with error (sct=0, sc=8) 00:28:21.371 [2024-11-28 12:52:03.800366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff46400d350 is same with the state(6) to be set 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Write completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Write completed with error (sct=0, sc=8) 00:28:21.371 Write completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Write completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Write completed with error (sct=0, sc=8) 00:28:21.371 Write completed with error (sct=0, sc=8) 00:28:21.371 Write completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Write completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error 
(sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Write completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Write completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Write completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 [2024-11-28 12:52:03.800734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a12c0 is same with the state(6) to be set 00:28:21.371 Write completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.371 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 
00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 [2024-11-28 12:52:03.800915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a14a0 is same with the state(6) to be set 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 
Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 Write completed with error (sct=0, sc=8) 00:28:21.372 Read completed with error (sct=0, sc=8) 00:28:21.372 [2024-11-28 12:52:03.801875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a1860 is same with the state(6) to be set 00:28:21.372 Initializing NVMe Controllers 00:28:21.372 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:21.372 Controller IO queue size 128, less than required. 00:28:21.372 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:28:21.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:21.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:21.372 Initialization complete. Launching workers. 00:28:21.372 ======================================================== 00:28:21.372 Latency(us) 00:28:21.372 Device Information : IOPS MiB/s Average min max 00:28:21.372 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 194.54 0.09 945694.48 644.03 1011865.36 00:28:21.372 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.84 0.08 877338.99 412.26 1011766.83 00:28:21.372 ======================================================== 00:28:21.372 Total : 349.37 0.17 915400.57 412.26 1011865.36 00:28:21.372 00:28:21.372 [2024-11-28 12:52:03.802537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a29b0 (9): Bad file descriptor 00:28:21.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:28:21.372 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.372 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:28:21.372 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2701352 00:28:21.372 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2701352 00:28:21.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: 
line 35: kill: (2701352) - No such process 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2701352 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2701352 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2701352 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:21.941 12:52:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:21.941 [2024-11-28 12:52:04.335594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2701830 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 
00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2701830 00:28:21.941 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:21.941 [2024-11-28 12:52:04.403058] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:28:22.506 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:22.506 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2701830 00:28:22.506 12:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:23.073 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:23.073 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2701830 00:28:23.073 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:23.639 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:23.639 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2701830 
00:28:23.639 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:23.897 12:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:23.897 12:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2701830 00:28:23.897 12:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:24.464 12:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:24.464 12:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2701830 00:28:24.464 12:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:25.031 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:25.031 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2701830 00:28:25.031 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:25.289 Initializing NVMe Controllers 00:28:25.289 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:25.289 Controller IO queue size 128, less than required. 00:28:25.289 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:25.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:25.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:25.289 Initialization complete. Launching workers. 
00:28:25.289 ======================================================== 00:28:25.289 Latency(us) 00:28:25.289 Device Information : IOPS MiB/s Average min max 00:28:25.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003066.59 1000198.17 1012044.34 00:28:25.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006209.81 1000181.73 1042574.99 00:28:25.289 ======================================================== 00:28:25.289 Total : 256.00 0.12 1004638.20 1000181.73 1042574.99 00:28:25.289 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2701830 00:28:25.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2701830) - No such process 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2701830 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:25.549 rmmod nvme_tcp 00:28:25.549 rmmod nvme_fabrics 00:28:25.549 rmmod nvme_keyring 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2701245 ']' 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2701245 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2701245 ']' 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2701245 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2701245 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:25.549 12:52:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2701245' 00:28:25.549 killing process with pid 2701245 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2701245 00:28:25.549 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2701245 00:28:25.809 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:25.809 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:25.809 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:25.809 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:25.809 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:25.809 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:25.809 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:25.809 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:25.809 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:25.809 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.809 12:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:25.809 12:52:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.714 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:27.973 00:28:27.973 real 0m15.783s 00:28:27.973 user 0m26.000s 00:28:27.973 sys 0m5.849s 00:28:27.973 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:27.973 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:27.973 ************************************ 00:28:27.973 END TEST nvmf_delete_subsystem 00:28:27.973 ************************************ 00:28:27.973 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:27.973 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:27.973 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:27.973 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:27.973 ************************************ 00:28:27.973 START TEST nvmf_host_management 00:28:27.973 ************************************ 00:28:27.973 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:27.973 * Looking for test storage... 
00:28:27.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:27.973 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:27.973 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:28:27.973 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:27.973 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:27.973 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:27.973 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:27.973 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:27.973 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:27.973 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:27.973 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:27.973 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:27.973 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:27.973 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:27.973 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:27.973 12:52:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:27.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.974 --rc genhtml_branch_coverage=1 00:28:27.974 --rc genhtml_function_coverage=1 00:28:27.974 --rc genhtml_legend=1 00:28:27.974 --rc geninfo_all_blocks=1 00:28:27.974 --rc geninfo_unexecuted_blocks=1 00:28:27.974 00:28:27.974 ' 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:27.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.974 --rc genhtml_branch_coverage=1 00:28:27.974 --rc genhtml_function_coverage=1 00:28:27.974 --rc genhtml_legend=1 00:28:27.974 --rc geninfo_all_blocks=1 00:28:27.974 --rc geninfo_unexecuted_blocks=1 00:28:27.974 00:28:27.974 ' 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:27.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.974 --rc genhtml_branch_coverage=1 00:28:27.974 --rc genhtml_function_coverage=1 00:28:27.974 --rc genhtml_legend=1 00:28:27.974 --rc geninfo_all_blocks=1 00:28:27.974 --rc geninfo_unexecuted_blocks=1 00:28:27.974 00:28:27.974 ' 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:27.974 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.974 --rc genhtml_branch_coverage=1 00:28:27.974 --rc genhtml_function_coverage=1 00:28:27.974 --rc genhtml_legend=1 00:28:27.974 --rc geninfo_all_blocks=1 00:28:27.974 --rc geninfo_unexecuted_blocks=1 00:28:27.974 00:28:27.974 ' 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:27.974 12:52:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.974 
12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:27.974 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:27.975 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:27.975 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.975 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:27.975 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.233 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:28.233 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:28.233 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:28.233 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:33.563 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.563 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:33.563 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:33.563 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:33.564 
12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.564 12:52:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:33.564 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.564 12:52:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:33.564 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.564 12:52:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:33.564 Found net devices under 0000:86:00.0: cvl_0_0 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:33.564 Found net devices under 0000:86:00.1: cvl_0_1 00:28:33.564 12:52:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:33.564 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:33.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:28:33.565 00:28:33.565 --- 10.0.0.2 ping statistics --- 00:28:33.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.565 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:33.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:28:33.565 00:28:33.565 --- 10.0.0.1 ping statistics --- 00:28:33.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.565 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2705815 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2705815 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2705815 ']' 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:33.565 [2024-11-28 12:52:15.608814] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:33.565 [2024-11-28 12:52:15.609776] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:28:33.565 [2024-11-28 12:52:15.609815] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.565 [2024-11-28 12:52:15.677761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:33.565 [2024-11-28 12:52:15.721245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.565 [2024-11-28 12:52:15.721283] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.565 [2024-11-28 12:52:15.721290] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.565 [2024-11-28 12:52:15.721296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:33.565 [2024-11-28 12:52:15.721302] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:33.565 [2024-11-28 12:52:15.722845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:33.565 [2024-11-28 12:52:15.722930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:33.565 [2024-11-28 12:52:15.723108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:33.565 [2024-11-28 12:52:15.723109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.565 [2024-11-28 12:52:15.791056] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:33.565 [2024-11-28 12:52:15.791224] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:33.565 [2024-11-28 12:52:15.791637] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:33.565 [2024-11-28 12:52:15.791677] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:33.565 [2024-11-28 12:52:15.791820] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:33.565 [2024-11-28 12:52:15.859650] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:33.565 12:52:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:33.565 Malloc0 00:28:33.565 [2024-11-28 12:52:15.935870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2705957 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2705957 /var/tmp/bdevperf.sock 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2705957 ']' 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:33.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:33.565 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.566 { 00:28:33.566 "params": { 00:28:33.566 "name": "Nvme$subsystem", 00:28:33.566 "trtype": "$TEST_TRANSPORT", 00:28:33.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.566 "adrfam": "ipv4", 00:28:33.566 "trsvcid": "$NVMF_PORT", 00:28:33.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.566 "hdgst": ${hdgst:-false}, 00:28:33.566 "ddgst": ${ddgst:-false} 00:28:33.566 }, 00:28:33.566 "method": "bdev_nvme_attach_controller" 00:28:33.566 } 00:28:33.566 EOF 00:28:33.566 )") 00:28:33.566 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:33.566 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:28:33.566 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:33.566 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:33.566 "params": { 00:28:33.566 "name": "Nvme0", 00:28:33.566 "trtype": "tcp", 00:28:33.566 "traddr": "10.0.0.2", 00:28:33.566 "adrfam": "ipv4", 00:28:33.566 "trsvcid": "4420", 00:28:33.566 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:33.566 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:33.566 "hdgst": false, 00:28:33.566 "ddgst": false 00:28:33.566 }, 00:28:33.566 "method": "bdev_nvme_attach_controller" 00:28:33.566 }' 00:28:33.566 [2024-11-28 12:52:16.031147] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:28:33.566 [2024-11-28 12:52:16.031197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2705957 ] 00:28:33.858 [2024-11-28 12:52:16.095298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.858 [2024-11-28 12:52:16.136982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.155 Running I/O for 10 seconds... 
00:28:34.155 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:34.155 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:34.155 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:34.155 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.155 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:34.155 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.155 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:34.155 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:34.155 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:34.155 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:34.155 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:34.155 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:34.155 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:34.155 12:52:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:34.155 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:34.155 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:34.155 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.155 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:34.155 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.155 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:28:34.155 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:28:34.155 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:34.431 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:34.431 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:34.431 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:34.431 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:34.431 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
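The loop being traced here is host_management.sh's waitforio: poll bdev_get_iostat over the bdevperf RPC socket up to 10 times, 0.25 s apart, until num_read_ops reaches 100. A hedged, self-contained sketch of that pattern; rpc_cmd is stubbed with canned iostat JSON so the sketch runs without an SPDK target, and sed stands in for the `jq -r '.bdevs[0].num_read_ops'` filter used in the trace:

```shell
#!/usr/bin/env bash
# Stub: the real rpc_cmd sends JSON-RPC to /var/tmp/bdevperf.sock; here it
# returns canned bdev_get_iostat output (FAKE_OPS overrides the count).
rpc_cmd() {
    echo "{\"bdevs\":[{\"name\":\"Nvme0n1\",\"num_read_ops\":${FAKE_OPS:-661}}]}"
}

# Poll the read IO count until it crosses the threshold or the retry
# budget runs out, mirroring waitforio in target/host_management.sh.
waitforio() {
    local bdev=$1 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd bdev_get_iostat -b "$bdev" |
            sed -n 's/.*"num_read_ops":\([0-9]*\).*/\1/p')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio Nvme0n1 && echo "io started on Nvme0n1"
```

In the trace the first poll reads 78 ops (below the threshold), the second reads 661, and the loop breaks with success, exactly this control flow.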
00:28:34.431 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:28:34.431 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:34.431 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=661
00:28:34.431 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 661 -ge 100 ']'
00:28:34.431 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:28:34.431 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break
00:28:34.431 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:28:34.431 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:28:34.431 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:34.431 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:28:34.431 [2024-11-28 12:52:16.859687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a83d70 is same with the state(6) to be set
00:28:34.432 [tcp.c:1773 message above repeated 38 more times for tqpair=0x1a83d70, timestamps 12:52:16.859727 through 12:52:16.859964]
00:28:34.432 [2024-11-28 12:52:16.860098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:34.432 [2024-11-28 12:52:16.860132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:34.433 [the WRITE / ABORTED - SQ DELETION pair above repeated for cid:1 through cid:63, lba 98432 through 106368 in steps of 128, timestamps 12:52:16.860148 through 12:52:16.861100]
00:28:34.434 [2024-11-28 12:52:16.861127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.434 [2024-11-28 12:52:16.862075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:34.434 task offset: 98304 on job bdev=Nvme0n1 fails
00:28:34.434
00:28:34.434 Latency(us)
00:28:34.434 [2024-11-28T11:52:16.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:34.434 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:34.434 Job: Nvme0n1 ended in about 0.41 seconds with error
00:28:34.434 Verification LBA range: start 0x0 length 0x400
00:28:34.434 Nvme0n1 : 0.41 1894.06 118.38 157.84 0.00 30346.14 2692.67 27468.13
00:28:34.434 [2024-11-28T11:52:16.953Z] ===================================================================================================================
00:28:34.434 [2024-11-28T11:52:16.953Z] Total : 1894.06 118.38 157.84 0.00 30346.14 2692.67 27468.13
00:28:34.434 [2024-11-28 12:52:16.864483] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:34.434 [2024-11-28 12:52:16.864505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbbf510 (9): Bad file descriptor
00:28:34.434 12:52:16
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.434 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:34.434 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.434 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:34.434 [2024-11-28 12:52:16.865618] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:28:34.434 [2024-11-28 12:52:16.865685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:34.434 [2024-11-28 12:52:16.865708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.434 [2024-11-28 12:52:16.865720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:28:34.434 [2024-11-28 12:52:16.865731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:28:34.434 [2024-11-28 12:52:16.865739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.434 [2024-11-28 12:52:16.865745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xbbf510 00:28:34.434 [2024-11-28 12:52:16.865765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbbf510 (9): Bad file descriptor 00:28:34.435 [2024-11-28 12:52:16.865776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:34.435 [2024-11-28 12:52:16.865782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:34.435 [2024-11-28 12:52:16.865791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:34.435 [2024-11-28 12:52:16.865799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:34.435 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.435 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:28:35.405 12:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2705957 00:28:35.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2705957) - No such process 00:28:35.405 12:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:35.405 12:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:35.405 12:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:35.405 12:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:35.405 12:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:35.405 12:52:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:35.405 12:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:35.405 12:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:35.405 { 00:28:35.405 "params": { 00:28:35.405 "name": "Nvme$subsystem", 00:28:35.405 "trtype": "$TEST_TRANSPORT", 00:28:35.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.405 "adrfam": "ipv4", 00:28:35.405 "trsvcid": "$NVMF_PORT", 00:28:35.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.405 "hdgst": ${hdgst:-false}, 00:28:35.405 "ddgst": ${ddgst:-false} 00:28:35.405 }, 00:28:35.405 "method": "bdev_nvme_attach_controller" 00:28:35.405 } 00:28:35.405 EOF 00:28:35.405 )") 00:28:35.405 12:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:35.405 12:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:28:35.405 12:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:35.405 12:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:35.405 "params": { 00:28:35.405 "name": "Nvme0", 00:28:35.405 "trtype": "tcp", 00:28:35.405 "traddr": "10.0.0.2", 00:28:35.405 "adrfam": "ipv4", 00:28:35.405 "trsvcid": "4420", 00:28:35.405 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:35.405 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:35.405 "hdgst": false, 00:28:35.405 "ddgst": false 00:28:35.405 }, 00:28:35.405 "method": "bdev_nvme_attach_controller" 00:28:35.405 }' 00:28:35.664 [2024-11-28 12:52:17.932312] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:28:35.664 [2024-11-28 12:52:17.932367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2706335 ] 00:28:35.664 [2024-11-28 12:52:17.994248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.664 [2024-11-28 12:52:18.033944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.922 Running I/O for 1 seconds... 00:28:36.858 1933.00 IOPS, 120.81 MiB/s 00:28:36.858 Latency(us) 00:28:36.858 [2024-11-28T11:52:19.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.858 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:36.858 Verification LBA range: start 0x0 length 0x400 00:28:36.858 Nvme0n1 : 1.01 1980.94 123.81 0.00 0.00 31707.31 1396.20 27582.11 00:28:36.858 [2024-11-28T11:52:19.377Z] =================================================================================================================== 00:28:36.858 [2024-11-28T11:52:19.377Z] Total : 1980.94 123.81 0.00 0.00 31707.31 1396.20 27582.11 00:28:37.117 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:28:37.117 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:28:37.117 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:37.117 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:37.117 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:28:37.118 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:37.118 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:28:37.118 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:37.118 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:28:37.118 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:37.118 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:37.118 rmmod nvme_tcp 00:28:37.118 rmmod nvme_fabrics 00:28:37.118 rmmod nvme_keyring 00:28:37.118 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:37.118 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:28:37.118 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:28:37.118 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2705815 ']' 00:28:37.118 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2705815 00:28:37.118 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2705815 ']' 00:28:37.118 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2705815 00:28:37.118 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:28:37.118 12:52:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:37.118 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2705815 00:28:37.375 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:37.375 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:37.375 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2705815' 00:28:37.375 killing process with pid 2705815 00:28:37.375 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2705815 00:28:37.375 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2705815 00:28:37.376 [2024-11-28 12:52:19.823019] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:37.376 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:37.376 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:37.376 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:37.376 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:28:37.376 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:28:37.376 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:37.376 12:52:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:28:37.376 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:37.376 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:37.376 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.376 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.376 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.906 12:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:39.906 12:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:39.906 00:28:39.906 real 0m11.620s 00:28:39.906 user 0m18.386s 00:28:39.906 sys 0m5.592s 00:28:39.906 12:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:39.906 12:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:39.906 ************************************ 00:28:39.906 END TEST nvmf_host_management 00:28:39.906 ************************************ 00:28:39.906 12:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:39.906 12:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:39.906 
12:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:39.906 12:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:39.906 ************************************ 00:28:39.906 START TEST nvmf_lvol 00:28:39.906 ************************************ 00:28:39.906 12:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:39.906 * Looking for test storage... 00:28:39.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:39.906 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:39.906 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:28:39.906 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:39.906 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:39.906 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:39.906 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:39.906 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:39.906 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:28:39.906 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:28:39.906 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:28:39.906 12:52:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:28:39.906 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:28:39.906 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:28:39.906 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:39.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.907 --rc genhtml_branch_coverage=1 00:28:39.907 --rc 
genhtml_function_coverage=1 00:28:39.907 --rc genhtml_legend=1 00:28:39.907 --rc geninfo_all_blocks=1 00:28:39.907 --rc geninfo_unexecuted_blocks=1 00:28:39.907 00:28:39.907 ' 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:39.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.907 --rc genhtml_branch_coverage=1 00:28:39.907 --rc genhtml_function_coverage=1 00:28:39.907 --rc genhtml_legend=1 00:28:39.907 --rc geninfo_all_blocks=1 00:28:39.907 --rc geninfo_unexecuted_blocks=1 00:28:39.907 00:28:39.907 ' 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:39.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.907 --rc genhtml_branch_coverage=1 00:28:39.907 --rc genhtml_function_coverage=1 00:28:39.907 --rc genhtml_legend=1 00:28:39.907 --rc geninfo_all_blocks=1 00:28:39.907 --rc geninfo_unexecuted_blocks=1 00:28:39.907 00:28:39.907 ' 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:39.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.907 --rc genhtml_branch_coverage=1 00:28:39.907 --rc genhtml_function_coverage=1 00:28:39.907 --rc genhtml_legend=1 00:28:39.907 --rc geninfo_all_blocks=1 00:28:39.907 --rc geninfo_unexecuted_blocks=1 00:28:39.907 00:28:39.907 ' 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.907 12:52:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:39.907 12:52:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:39.907 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:39.908 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.908 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.908 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.908 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:39.908 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:39.908 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:28:39.908 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:45.171 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:45.171 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:45.171 12:52:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:45.171 Found net devices under 0000:86:00.0: cvl_0_0 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.171 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:45.172 Found net devices under 0000:86:00.1: cvl_0_1 00:28:45.172 12:52:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:45.172 12:52:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:45.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:45.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:28:45.172 00:28:45.172 --- 10.0.0.2 ping statistics --- 00:28:45.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.172 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:45.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:45.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:28:45.172 00:28:45.172 --- 10.0.0.1 ping statistics --- 00:28:45.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.172 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:28:45.172 
12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2709995 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2709995 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2709995 ']' 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:45.172 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:45.172 [2024-11-28 12:52:27.643032] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:28:45.172 [2024-11-28 12:52:27.643955] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:28:45.172 [2024-11-28 12:52:27.643993] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.432 [2024-11-28 12:52:27.710507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:45.432 [2024-11-28 12:52:27.753685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.432 [2024-11-28 12:52:27.753724] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:45.432 [2024-11-28 12:52:27.753731] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.432 [2024-11-28 12:52:27.753737] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:45.432 [2024-11-28 12:52:27.753742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:45.432 [2024-11-28 12:52:27.755141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.432 [2024-11-28 12:52:27.755237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.432 [2024-11-28 12:52:27.755238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:45.432 [2024-11-28 12:52:27.824499] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:45.432 [2024-11-28 12:52:27.824512] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:45.432 [2024-11-28 12:52:27.824631] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:28:45.432 [2024-11-28 12:52:27.824751] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:45.432 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:45.432 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:28:45.432 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:45.432 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:45.432 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:45.432 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:45.432 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:45.690 [2024-11-28 12:52:28.055979] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:45.690 12:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:45.948 12:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:45.948 12:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:46.206 12:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:46.206 12:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:46.206 12:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:28:46.464 12:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5e8a0807-70bf-4242-b520-3e87256bf035 00:28:46.464 12:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5e8a0807-70bf-4242-b520-3e87256bf035 lvol 20 00:28:46.721 12:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=fdecbcc4-6975-4eb0-b6e0-a470f0bcc055 00:28:46.721 12:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:46.979 12:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fdecbcc4-6975-4eb0-b6e0-a470f0bcc055 00:28:47.237 12:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:47.237 [2024-11-28 12:52:29.671810] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:47.237 12:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:47.494 
12:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2710356 00:28:47.494 12:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:28:47.494 12:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:28:48.427 12:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot fdecbcc4-6975-4eb0-b6e0-a470f0bcc055 MY_SNAPSHOT 00:28:48.685 12:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=095fa078-60d7-42e3-913e-276b5238d7ea 00:28:48.685 12:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize fdecbcc4-6975-4eb0-b6e0-a470f0bcc055 30 00:28:48.943 12:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 095fa078-60d7-42e3-913e-276b5238d7ea MY_CLONE 00:28:49.200 12:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=2e42b126-6151-4566-80da-1fa3d9a16ea0 00:28:49.200 12:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 2e42b126-6151-4566-80da-1fa3d9a16ea0 00:28:49.765 12:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2710356 00:28:57.891 Initializing NVMe Controllers 00:28:57.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:57.891 
Controller IO queue size 128, less than required. 00:28:57.891 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:57.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:28:57.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:28:57.891 Initialization complete. Launching workers. 00:28:57.891 ======================================================== 00:28:57.891 Latency(us) 00:28:57.891 Device Information : IOPS MiB/s Average min max 00:28:57.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12113.70 47.32 10568.70 1604.16 61087.38 00:28:57.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12253.90 47.87 10450.20 1687.92 63944.32 00:28:57.891 ======================================================== 00:28:57.891 Total : 24367.60 95.19 10509.11 1604.16 63944.32 00:28:57.891 00:28:57.891 12:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:58.150 12:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fdecbcc4-6975-4eb0-b6e0-a470f0bcc055 00:28:58.408 12:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5e8a0807-70bf-4242-b520-3e87256bf035 00:28:58.667 12:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:28:58.667 12:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:28:58.667 12:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:28:58.667 12:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:58.667 12:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:28:58.667 12:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:58.667 12:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:28:58.667 12:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:58.667 12:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:58.667 rmmod nvme_tcp 00:28:58.667 rmmod nvme_fabrics 00:28:58.667 rmmod nvme_keyring 00:28:58.667 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:58.667 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:28:58.667 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:28:58.667 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2709995 ']' 00:28:58.667 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2709995 00:28:58.667 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2709995 ']' 00:28:58.667 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2709995 00:28:58.667 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:28:58.667 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:58.667 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2709995 00:28:58.667 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:58.667 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:58.667 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2709995' 00:28:58.667 killing process with pid 2709995 00:28:58.667 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2709995 00:28:58.667 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2709995 00:28:58.926 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:58.926 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:58.926 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:58.926 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:28:58.926 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:28:58.926 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:58.926 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:28:58.926 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:58.926 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:58.926 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.926 12:52:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.926 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.461 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:01.461 00:29:01.461 real 0m21.378s 00:29:01.461 user 0m55.712s 00:29:01.461 sys 0m9.452s 00:29:01.461 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:01.461 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:01.461 ************************************ 00:29:01.461 END TEST nvmf_lvol 00:29:01.461 ************************************ 00:29:01.461 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:01.461 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:01.461 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:01.461 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:01.461 ************************************ 00:29:01.461 START TEST nvmf_lvs_grow 00:29:01.461 ************************************ 00:29:01.461 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:01.461 * Looking for test storage... 
00:29:01.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:01.461 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:01.462 12:52:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:01.462 12:52:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:01.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.462 --rc genhtml_branch_coverage=1 00:29:01.462 --rc genhtml_function_coverage=1 00:29:01.462 --rc genhtml_legend=1 00:29:01.462 --rc geninfo_all_blocks=1 00:29:01.462 --rc geninfo_unexecuted_blocks=1 00:29:01.462 00:29:01.462 ' 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:01.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.462 --rc genhtml_branch_coverage=1 00:29:01.462 --rc genhtml_function_coverage=1 00:29:01.462 --rc genhtml_legend=1 00:29:01.462 --rc geninfo_all_blocks=1 00:29:01.462 --rc geninfo_unexecuted_blocks=1 00:29:01.462 00:29:01.462 ' 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:01.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.462 --rc genhtml_branch_coverage=1 00:29:01.462 --rc genhtml_function_coverage=1 00:29:01.462 --rc genhtml_legend=1 00:29:01.462 --rc geninfo_all_blocks=1 00:29:01.462 --rc geninfo_unexecuted_blocks=1 00:29:01.462 00:29:01.462 ' 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:01.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.462 --rc genhtml_branch_coverage=1 00:29:01.462 --rc genhtml_function_coverage=1 00:29:01.462 --rc genhtml_legend=1 00:29:01.462 --rc geninfo_all_blocks=1 00:29:01.462 --rc 
geninfo_unexecuted_blocks=1 00:29:01.462 00:29:01.462 ' 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:01.462 12:52:43 
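The xtrace above walks through the harness's dotted-version comparison (`lt 1.15 2` via `cmp_versions` in scripts/common.sh): both versions are split on `.`/`-`/`:` into arrays and compared component by component. A minimal re-creation of that logic is sketched below; the function name `lt_version` is illustrative, not the harness's own, and missing or non-numeric components are assumed to compare as 0.

```shell
# Sketch of the component-wise version compare traced above.
# Returns 0 (true) if $1 is strictly less than $2.
lt_version() {
    local -a ver1 ver2
    local v len a b
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # Missing components compare as 0; so do non-numeric ones.
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        [[ $a =~ ^[0-9]+$ ]] || a=0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal is not less-than
}

lt_version 1.15 2 && echo "1.15 < 2"
```

This is why the lcov check above succeeds: 1.15 splits to (1 15), 2 to (2), and 1 < 2 decides on the first component.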
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.462 12:52:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:01.462 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:01.463 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:01.463 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:01.463 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:01.463 12:52:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:01.463 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:01.463 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:01.463 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:01.463 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:01.463 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:01.463 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:01.463 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:01.463 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.463 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:01.463 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.463 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:01.463 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:01.463 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:01.463 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:06.732 
12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:06.732 12:52:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:06.732 12:52:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:06.732 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.732 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:06.733 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:06.733 Found net devices under 0000:86:00.0: cvl_0_0 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.733 12:52:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:06.733 Found net devices under 0000:86:00.1: cvl_0_1 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:06.733 
12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:06.733 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:06.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:06.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:29:06.992 00:29:06.992 --- 10.0.0.2 ping statistics --- 00:29:06.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.992 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:06.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:06.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:29:06.992 00:29:06.992 --- 10.0.0.1 ping statistics --- 00:29:06.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.992 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:06.992 12:52:49 
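The nvmf/common.sh trace above (lines @250-@291) builds the TCP test bed: one port of the NIC pair stays in the root namespace as the initiator (10.0.0.1 on cvl_0_1), its sibling moves into a network namespace as the target (10.0.0.2 on cvl_0_0), an iptables rule admits port 4420, and a ping in each direction verifies the link. The sketch below reproduces that sequence under assumed names (`setup_tcp_testbed` is not a harness function); the `$run` prefix defaults to `echo` for a dry run, since the real commands require root.

```shell
# Dry-run sketch of the namespace-based TCP test-bed setup traced above.
# Pass 'sudo' as the 4th argument (as root) to actually apply the commands.
setup_tcp_testbed() {
    local target_if=$1 initiator_if=$2 ns=$3 run=${4:-echo}
    $run ip netns add "$ns"
    $run ip link set "$target_if" netns "$ns"
    $run ip addr add 10.0.0.1/24 dev "$initiator_if"
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    $run ip link set "$initiator_if" up
    $run ip netns exec "$ns" ip link set "$target_if" up
    # Admit NVMe/TCP traffic (port 4420) before probing the link
    $run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    # Verify reachability in both directions, as the log does
    $run ping -c 1 10.0.0.2
    $run ip netns exec "$ns" ping -c 1 10.0.0.1
}

setup_tcp_testbed cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Putting the target in its own namespace gives the kernel two independent IP stacks on one host, so the initiator's `nvme connect` genuinely traverses the NIC rather than loopback.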
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2715707 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2715707 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2715707 ']' 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.992 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:06.992 [2024-11-28 12:52:49.449721] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:06.992 [2024-11-28 12:52:49.450626] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:29:06.992 [2024-11-28 12:52:49.450658] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.291 [2024-11-28 12:52:49.516832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.291 [2024-11-28 12:52:49.560600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.291 [2024-11-28 12:52:49.560636] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.291 [2024-11-28 12:52:49.560644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.291 [2024-11-28 12:52:49.560651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.291 [2024-11-28 12:52:49.560656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:07.292 [2024-11-28 12:52:49.561249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.292 [2024-11-28 12:52:49.629347] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:07.292 [2024-11-28 12:52:49.629584] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:07.292 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.292 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:07.292 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:07.292 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:07.292 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:07.292 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:07.292 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:07.550 [2024-11-28 12:52:49.865697] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.550 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:07.551 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:07.551 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.551 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:07.551 ************************************ 00:29:07.551 START TEST lvs_grow_clean 00:29:07.551 ************************************ 00:29:07.551 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:29:07.551 12:52:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:07.551 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:07.551 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:07.551 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:07.551 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:07.551 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:07.551 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:07.551 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:07.551 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:07.809 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:07.809 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:08.069 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=183ca06d-536a-471f-a87c-000c6b6e64ec 00:29:08.069 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 183ca06d-536a-471f-a87c-000c6b6e64ec 00:29:08.069 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:08.069 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:08.069 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:08.069 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 183ca06d-536a-471f-a87c-000c6b6e64ec lvol 150 00:29:08.328 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1b47ad38-06e5-42db-86e8-354a6e6c4948 00:29:08.328 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:08.328 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:08.587 [2024-11-28 12:52:50.925893] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:08.587 [2024-11-28 12:52:50.925986] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:08.587 true 00:29:08.587 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 183ca06d-536a-471f-a87c-000c6b6e64ec 00:29:08.587 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:08.845 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:08.845 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:08.845 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1b47ad38-06e5-42db-86e8-354a6e6c4948 00:29:09.102 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:09.361 [2024-11-28 12:52:51.714056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:09.361 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:09.619 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2716072 00:29:09.619 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:09.619 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2716072 /var/tmp/bdevperf.sock 00:29:09.619 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2716072 ']' 00:29:09.619 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:09.619 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:09.619 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:09.619 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:09.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:09.619 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:09.619 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:09.619 [2024-11-28 12:52:51.973317] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:29:09.619 [2024-11-28 12:52:51.973365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2716072 ] 00:29:09.619 [2024-11-28 12:52:52.035520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.619 [2024-11-28 12:52:52.080534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.877 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:09.877 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:09.877 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:10.135 Nvme0n1 00:29:10.135 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:10.394 [ 00:29:10.394 { 00:29:10.394 "name": "Nvme0n1", 00:29:10.394 "aliases": [ 00:29:10.394 "1b47ad38-06e5-42db-86e8-354a6e6c4948" 00:29:10.394 ], 00:29:10.394 "product_name": "NVMe disk", 00:29:10.394 
"block_size": 4096, 00:29:10.394 "num_blocks": 38912, 00:29:10.394 "uuid": "1b47ad38-06e5-42db-86e8-354a6e6c4948", 00:29:10.394 "numa_id": 1, 00:29:10.394 "assigned_rate_limits": { 00:29:10.394 "rw_ios_per_sec": 0, 00:29:10.394 "rw_mbytes_per_sec": 0, 00:29:10.394 "r_mbytes_per_sec": 0, 00:29:10.394 "w_mbytes_per_sec": 0 00:29:10.394 }, 00:29:10.394 "claimed": false, 00:29:10.394 "zoned": false, 00:29:10.394 "supported_io_types": { 00:29:10.394 "read": true, 00:29:10.394 "write": true, 00:29:10.394 "unmap": true, 00:29:10.394 "flush": true, 00:29:10.394 "reset": true, 00:29:10.394 "nvme_admin": true, 00:29:10.394 "nvme_io": true, 00:29:10.394 "nvme_io_md": false, 00:29:10.394 "write_zeroes": true, 00:29:10.394 "zcopy": false, 00:29:10.394 "get_zone_info": false, 00:29:10.394 "zone_management": false, 00:29:10.394 "zone_append": false, 00:29:10.394 "compare": true, 00:29:10.394 "compare_and_write": true, 00:29:10.394 "abort": true, 00:29:10.394 "seek_hole": false, 00:29:10.394 "seek_data": false, 00:29:10.394 "copy": true, 00:29:10.394 "nvme_iov_md": false 00:29:10.394 }, 00:29:10.394 "memory_domains": [ 00:29:10.394 { 00:29:10.394 "dma_device_id": "system", 00:29:10.394 "dma_device_type": 1 00:29:10.394 } 00:29:10.394 ], 00:29:10.394 "driver_specific": { 00:29:10.394 "nvme": [ 00:29:10.394 { 00:29:10.394 "trid": { 00:29:10.394 "trtype": "TCP", 00:29:10.394 "adrfam": "IPv4", 00:29:10.394 "traddr": "10.0.0.2", 00:29:10.394 "trsvcid": "4420", 00:29:10.394 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:10.394 }, 00:29:10.394 "ctrlr_data": { 00:29:10.394 "cntlid": 1, 00:29:10.394 "vendor_id": "0x8086", 00:29:10.394 "model_number": "SPDK bdev Controller", 00:29:10.394 "serial_number": "SPDK0", 00:29:10.394 "firmware_revision": "25.01", 00:29:10.394 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:10.394 "oacs": { 00:29:10.394 "security": 0, 00:29:10.394 "format": 0, 00:29:10.394 "firmware": 0, 00:29:10.394 "ns_manage": 0 00:29:10.394 }, 00:29:10.394 "multi_ctrlr": true, 
00:29:10.394 "ana_reporting": false 00:29:10.394 }, 00:29:10.394 "vs": { 00:29:10.394 "nvme_version": "1.3" 00:29:10.394 }, 00:29:10.394 "ns_data": { 00:29:10.394 "id": 1, 00:29:10.394 "can_share": true 00:29:10.394 } 00:29:10.394 } 00:29:10.394 ], 00:29:10.394 "mp_policy": "active_passive" 00:29:10.394 } 00:29:10.394 } 00:29:10.394 ] 00:29:10.394 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2716214 00:29:10.394 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:10.394 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:10.394 Running I/O for 10 seconds... 00:29:11.770 Latency(us) 00:29:11.770 [2024-11-28T11:52:54.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.770 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:11.770 Nvme0n1 : 1.00 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:29:11.770 [2024-11-28T11:52:54.289Z] =================================================================================================================== 00:29:11.770 [2024-11-28T11:52:54.289Z] Total : 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:29:11.770 00:29:12.337 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 183ca06d-536a-471f-a87c-000c6b6e64ec 00:29:12.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:12.596 Nvme0n1 : 2.00 22669.50 88.55 0.00 0.00 0.00 0.00 0.00 00:29:12.596 [2024-11-28T11:52:55.115Z] 
=================================================================================================================== 00:29:12.596 [2024-11-28T11:52:55.115Z] Total : 22669.50 88.55 0.00 0.00 0.00 0.00 0.00 00:29:12.596 00:29:12.596 true 00:29:12.596 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 183ca06d-536a-471f-a87c-000c6b6e64ec 00:29:12.596 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:12.853 12:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:12.853 12:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:12.853 12:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2716214 00:29:13.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:13.418 Nvme0n1 : 3.00 22690.67 88.64 0.00 0.00 0.00 0.00 0.00 00:29:13.418 [2024-11-28T11:52:55.937Z] =================================================================================================================== 00:29:13.418 [2024-11-28T11:52:55.937Z] Total : 22690.67 88.64 0.00 0.00 0.00 0.00 0.00 00:29:13.418 00:29:14.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:14.795 Nvme0n1 : 4.00 22764.75 88.92 0.00 0.00 0.00 0.00 0.00 00:29:14.795 [2024-11-28T11:52:57.314Z] =================================================================================================================== 00:29:14.795 [2024-11-28T11:52:57.314Z] Total : 22764.75 88.92 0.00 0.00 0.00 0.00 0.00 00:29:14.795 00:29:15.730 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:29:15.730 Nvme0n1 : 5.00 22809.20 89.10 0.00 0.00 0.00 0.00 0.00 00:29:15.730 [2024-11-28T11:52:58.249Z] =================================================================================================================== 00:29:15.730 [2024-11-28T11:52:58.249Z] Total : 22809.20 89.10 0.00 0.00 0.00 0.00 0.00 00:29:15.730 00:29:16.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:16.664 Nvme0n1 : 6.00 22849.50 89.26 0.00 0.00 0.00 0.00 0.00 00:29:16.664 [2024-11-28T11:52:59.183Z] =================================================================================================================== 00:29:16.664 [2024-11-28T11:52:59.183Z] Total : 22849.50 89.26 0.00 0.00 0.00 0.00 0.00 00:29:16.664 00:29:17.599 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:17.599 Nvme0n1 : 7.00 22878.29 89.37 0.00 0.00 0.00 0.00 0.00 00:29:17.599 [2024-11-28T11:53:00.118Z] =================================================================================================================== 00:29:17.599 [2024-11-28T11:53:00.118Z] Total : 22878.29 89.37 0.00 0.00 0.00 0.00 0.00 00:29:17.599 00:29:18.534 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:18.534 Nvme0n1 : 8.00 22890.00 89.41 0.00 0.00 0.00 0.00 0.00 00:29:18.534 [2024-11-28T11:53:01.053Z] =================================================================================================================== 00:29:18.534 [2024-11-28T11:53:01.053Z] Total : 22890.00 89.41 0.00 0.00 0.00 0.00 0.00 00:29:18.534 00:29:19.469 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:19.469 Nvme0n1 : 9.00 22914.89 89.51 0.00 0.00 0.00 0.00 0.00 00:29:19.469 [2024-11-28T11:53:01.988Z] =================================================================================================================== 00:29:19.469 [2024-11-28T11:53:01.988Z] Total : 22914.89 89.51 0.00 0.00 0.00 0.00 0.00 00:29:19.469 
00:29:20.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:20.403 Nvme0n1 : 10.00 22922.10 89.54 0.00 0.00 0.00 0.00 0.00 00:29:20.403 [2024-11-28T11:53:02.922Z] =================================================================================================================== 00:29:20.403 [2024-11-28T11:53:02.922Z] Total : 22922.10 89.54 0.00 0.00 0.00 0.00 0.00 00:29:20.403 00:29:20.403 00:29:20.403 Latency(us) 00:29:20.403 [2024-11-28T11:53:02.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:20.403 Nvme0n1 : 10.00 22928.21 89.56 0.00 0.00 5579.53 3205.57 14930.81 00:29:20.403 [2024-11-28T11:53:02.922Z] =================================================================================================================== 00:29:20.403 [2024-11-28T11:53:02.922Z] Total : 22928.21 89.56 0.00 0.00 5579.53 3205.57 14930.81 00:29:20.403 { 00:29:20.403 "results": [ 00:29:20.403 { 00:29:20.403 "job": "Nvme0n1", 00:29:20.403 "core_mask": "0x2", 00:29:20.403 "workload": "randwrite", 00:29:20.403 "status": "finished", 00:29:20.403 "queue_depth": 128, 00:29:20.403 "io_size": 4096, 00:29:20.403 "runtime": 10.002916, 00:29:20.403 "iops": 22928.214132758887, 00:29:20.403 "mibps": 89.5633364560894, 00:29:20.403 "io_failed": 0, 00:29:20.403 "io_timeout": 0, 00:29:20.403 "avg_latency_us": 5579.531925656494, 00:29:20.403 "min_latency_us": 3205.5652173913045, 00:29:20.403 "max_latency_us": 14930.810434782608 00:29:20.403 } 00:29:20.403 ], 00:29:20.403 "core_count": 1 00:29:20.403 } 00:29:20.660 12:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2716072 00:29:20.660 12:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2716072 ']' 00:29:20.660 12:53:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2716072 00:29:20.660 12:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:29:20.660 12:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:20.660 12:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2716072 00:29:20.660 12:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:20.660 12:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:20.660 12:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2716072' 00:29:20.660 killing process with pid 2716072 00:29:20.660 12:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2716072 00:29:20.660 Received shutdown signal, test time was about 10.000000 seconds 00:29:20.660 00:29:20.660 Latency(us) 00:29:20.660 [2024-11-28T11:53:03.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.660 [2024-11-28T11:53:03.179Z] =================================================================================================================== 00:29:20.660 [2024-11-28T11:53:03.179Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:20.660 12:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2716072 00:29:20.660 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:20.918 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:21.176 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 183ca06d-536a-471f-a87c-000c6b6e64ec 00:29:21.176 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:21.435 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:21.435 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:21.435 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:21.435 [2024-11-28 12:53:03.921738] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:21.694 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 183ca06d-536a-471f-a87c-000c6b6e64ec 00:29:21.694 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:29:21.694 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 183ca06d-536a-471f-a87c-000c6b6e64ec 00:29:21.694 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:21.694 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:21.694 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:21.694 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:21.694 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:21.694 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:21.694 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:21.694 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:21.694 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 183ca06d-536a-471f-a87c-000c6b6e64ec 00:29:21.694 request: 00:29:21.694 { 00:29:21.694 "uuid": "183ca06d-536a-471f-a87c-000c6b6e64ec", 00:29:21.694 "method": 
"bdev_lvol_get_lvstores", 00:29:21.694 "req_id": 1 00:29:21.694 } 00:29:21.694 Got JSON-RPC error response 00:29:21.694 response: 00:29:21.694 { 00:29:21.694 "code": -19, 00:29:21.694 "message": "No such device" 00:29:21.694 } 00:29:21.694 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:29:21.694 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:21.694 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:21.694 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:21.694 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:21.952 aio_bdev 00:29:21.952 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1b47ad38-06e5-42db-86e8-354a6e6c4948 00:29:21.952 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=1b47ad38-06e5-42db-86e8-354a6e6c4948 00:29:21.952 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:21.952 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:29:21.952 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:21.952 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:21.952 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:22.211 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1b47ad38-06e5-42db-86e8-354a6e6c4948 -t 2000 00:29:22.211 [ 00:29:22.211 { 00:29:22.211 "name": "1b47ad38-06e5-42db-86e8-354a6e6c4948", 00:29:22.211 "aliases": [ 00:29:22.211 "lvs/lvol" 00:29:22.211 ], 00:29:22.211 "product_name": "Logical Volume", 00:29:22.211 "block_size": 4096, 00:29:22.211 "num_blocks": 38912, 00:29:22.211 "uuid": "1b47ad38-06e5-42db-86e8-354a6e6c4948", 00:29:22.211 "assigned_rate_limits": { 00:29:22.211 "rw_ios_per_sec": 0, 00:29:22.211 "rw_mbytes_per_sec": 0, 00:29:22.211 "r_mbytes_per_sec": 0, 00:29:22.211 "w_mbytes_per_sec": 0 00:29:22.211 }, 00:29:22.211 "claimed": false, 00:29:22.211 "zoned": false, 00:29:22.211 "supported_io_types": { 00:29:22.211 "read": true, 00:29:22.211 "write": true, 00:29:22.211 "unmap": true, 00:29:22.211 "flush": false, 00:29:22.211 "reset": true, 00:29:22.211 "nvme_admin": false, 00:29:22.211 "nvme_io": false, 00:29:22.211 "nvme_io_md": false, 00:29:22.211 "write_zeroes": true, 00:29:22.211 "zcopy": false, 00:29:22.211 "get_zone_info": false, 00:29:22.211 "zone_management": false, 00:29:22.211 "zone_append": false, 00:29:22.211 "compare": false, 00:29:22.211 "compare_and_write": false, 00:29:22.211 "abort": false, 00:29:22.211 "seek_hole": true, 00:29:22.211 "seek_data": true, 00:29:22.211 "copy": false, 00:29:22.211 "nvme_iov_md": false 00:29:22.211 }, 00:29:22.211 "driver_specific": { 00:29:22.211 "lvol": { 00:29:22.211 "lvol_store_uuid": "183ca06d-536a-471f-a87c-000c6b6e64ec", 00:29:22.211 "base_bdev": "aio_bdev", 00:29:22.211 
"thin_provision": false, 00:29:22.211 "num_allocated_clusters": 38, 00:29:22.211 "snapshot": false, 00:29:22.211 "clone": false, 00:29:22.211 "esnap_clone": false 00:29:22.211 } 00:29:22.211 } 00:29:22.211 } 00:29:22.211 ] 00:29:22.469 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:29:22.469 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 183ca06d-536a-471f-a87c-000c6b6e64ec 00:29:22.469 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:22.469 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:22.469 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 183ca06d-536a-471f-a87c-000c6b6e64ec 00:29:22.469 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:22.727 12:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:22.727 12:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1b47ad38-06e5-42db-86e8-354a6e6c4948 00:29:22.986 12:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 183ca06d-536a-471f-a87c-000c6b6e64ec 
00:29:23.244 12:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:23.244 12:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:23.244 00:29:23.244 real 0m15.846s 00:29:23.244 user 0m15.396s 00:29:23.244 sys 0m1.453s 00:29:23.244 12:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:23.244 12:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:23.244 ************************************ 00:29:23.244 END TEST lvs_grow_clean 00:29:23.244 ************************************ 00:29:23.504 12:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:23.504 12:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:23.504 12:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:23.504 12:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:23.504 ************************************ 00:29:23.504 START TEST lvs_grow_dirty 00:29:23.504 ************************************ 00:29:23.504 12:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:29:23.504 12:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:23.504 12:53:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:23.504 12:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:23.504 12:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:23.504 12:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:23.504 12:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:23.504 12:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:23.504 12:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:23.504 12:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:23.763 12:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:23.763 12:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:23.763 12:53:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=083cecad-7fd7-47f5-b431-5c8ec9455840 00:29:23.763 12:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 083cecad-7fd7-47f5-b431-5c8ec9455840 00:29:23.763 12:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:24.021 12:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:24.021 12:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:24.022 12:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 083cecad-7fd7-47f5-b431-5c8ec9455840 lvol 150 00:29:24.281 12:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d3471182-4568-4ec6-a282-49bfabbdd1fa 00:29:24.281 12:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:24.281 12:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:24.281 [2024-11-28 12:53:06.789673] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:24.281 [2024-11-28 
12:53:06.789812] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:24.281 true 00:29:24.540 12:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 083cecad-7fd7-47f5-b431-5c8ec9455840 00:29:24.540 12:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:24.540 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:24.540 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:24.799 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d3471182-4568-4ec6-a282-49bfabbdd1fa 00:29:25.057 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:25.057 [2024-11-28 12:53:07.569844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.317 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:25.317 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2718663 00:29:25.317 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:25.317 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2718663 /var/tmp/bdevperf.sock 00:29:25.317 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2718663 ']' 00:29:25.317 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:25.317 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:25.317 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:25.317 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:25.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:25.317 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:25.317 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:25.575 [2024-11-28 12:53:07.838295] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:29:25.575 [2024-11-28 12:53:07.838345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2718663 ] 00:29:25.575 [2024-11-28 12:53:07.899611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.575 [2024-11-28 12:53:07.941889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.576 12:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:25.576 12:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:25.576 12:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:26.141 Nvme0n1 00:29:26.141 12:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:26.141 [ 00:29:26.141 { 00:29:26.141 "name": "Nvme0n1", 00:29:26.141 "aliases": [ 00:29:26.141 "d3471182-4568-4ec6-a282-49bfabbdd1fa" 00:29:26.141 ], 00:29:26.141 "product_name": "NVMe disk", 00:29:26.141 "block_size": 4096, 00:29:26.141 "num_blocks": 38912, 00:29:26.141 "uuid": "d3471182-4568-4ec6-a282-49bfabbdd1fa", 00:29:26.141 "numa_id": 1, 00:29:26.141 "assigned_rate_limits": { 00:29:26.141 "rw_ios_per_sec": 0, 00:29:26.141 "rw_mbytes_per_sec": 0, 00:29:26.141 "r_mbytes_per_sec": 0, 00:29:26.141 "w_mbytes_per_sec": 0 00:29:26.141 }, 00:29:26.141 "claimed": false, 00:29:26.141 "zoned": false, 
00:29:26.141 "supported_io_types": { 00:29:26.141 "read": true, 00:29:26.141 "write": true, 00:29:26.141 "unmap": true, 00:29:26.141 "flush": true, 00:29:26.141 "reset": true, 00:29:26.141 "nvme_admin": true, 00:29:26.141 "nvme_io": true, 00:29:26.141 "nvme_io_md": false, 00:29:26.141 "write_zeroes": true, 00:29:26.141 "zcopy": false, 00:29:26.141 "get_zone_info": false, 00:29:26.141 "zone_management": false, 00:29:26.141 "zone_append": false, 00:29:26.141 "compare": true, 00:29:26.141 "compare_and_write": true, 00:29:26.141 "abort": true, 00:29:26.141 "seek_hole": false, 00:29:26.141 "seek_data": false, 00:29:26.141 "copy": true, 00:29:26.141 "nvme_iov_md": false 00:29:26.141 }, 00:29:26.141 "memory_domains": [ 00:29:26.141 { 00:29:26.141 "dma_device_id": "system", 00:29:26.141 "dma_device_type": 1 00:29:26.141 } 00:29:26.141 ], 00:29:26.141 "driver_specific": { 00:29:26.141 "nvme": [ 00:29:26.141 { 00:29:26.141 "trid": { 00:29:26.141 "trtype": "TCP", 00:29:26.141 "adrfam": "IPv4", 00:29:26.141 "traddr": "10.0.0.2", 00:29:26.141 "trsvcid": "4420", 00:29:26.141 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:26.141 }, 00:29:26.141 "ctrlr_data": { 00:29:26.141 "cntlid": 1, 00:29:26.141 "vendor_id": "0x8086", 00:29:26.141 "model_number": "SPDK bdev Controller", 00:29:26.141 "serial_number": "SPDK0", 00:29:26.141 "firmware_revision": "25.01", 00:29:26.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:26.141 "oacs": { 00:29:26.141 "security": 0, 00:29:26.141 "format": 0, 00:29:26.141 "firmware": 0, 00:29:26.141 "ns_manage": 0 00:29:26.141 }, 00:29:26.141 "multi_ctrlr": true, 00:29:26.141 "ana_reporting": false 00:29:26.141 }, 00:29:26.141 "vs": { 00:29:26.141 "nvme_version": "1.3" 00:29:26.141 }, 00:29:26.141 "ns_data": { 00:29:26.141 "id": 1, 00:29:26.141 "can_share": true 00:29:26.142 } 00:29:26.142 } 00:29:26.142 ], 00:29:26.142 "mp_policy": "active_passive" 00:29:26.142 } 00:29:26.142 } 00:29:26.142 ] 00:29:26.142 12:53:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2718803 00:29:26.142 12:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:26.142 12:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:26.400 Running I/O for 10 seconds... 00:29:27.333 Latency(us) 00:29:27.333 [2024-11-28T11:53:09.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:27.333 Nvme0n1 : 1.00 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:29:27.333 [2024-11-28T11:53:09.852Z] =================================================================================================================== 00:29:27.333 [2024-11-28T11:53:09.852Z] Total : 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:29:27.333 00:29:28.266 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 083cecad-7fd7-47f5-b431-5c8ec9455840 00:29:28.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:28.267 Nvme0n1 : 2.00 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:29:28.267 [2024-11-28T11:53:10.786Z] =================================================================================================================== 00:29:28.267 [2024-11-28T11:53:10.786Z] Total : 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:29:28.267 00:29:28.524 true 00:29:28.524 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 083cecad-7fd7-47f5-b431-5c8ec9455840 00:29:28.524 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:28.524 12:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:28.524 12:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:28.524 12:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2718803 00:29:29.458 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:29.458 Nvme0n1 : 3.00 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:29:29.458 [2024-11-28T11:53:11.977Z] =================================================================================================================== 00:29:29.458 [2024-11-28T11:53:11.977Z] Total : 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:29:29.458 00:29:30.392 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:30.392 Nvme0n1 : 4.00 22701.25 88.68 0.00 0.00 0.00 0.00 0.00 00:29:30.392 [2024-11-28T11:53:12.911Z] =================================================================================================================== 00:29:30.392 [2024-11-28T11:53:12.911Z] Total : 22701.25 88.68 0.00 0.00 0.00 0.00 0.00 00:29:30.392 00:29:31.327 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:31.327 Nvme0n1 : 5.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:29:31.327 [2024-11-28T11:53:13.846Z] =================================================================================================================== 00:29:31.327 [2024-11-28T11:53:13.846Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:29:31.327 00:29:32.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:29:32.261 Nvme0n1 : 6.00 22796.50 89.05 0.00 0.00 0.00 0.00 0.00 00:29:32.261 [2024-11-28T11:53:14.780Z] =================================================================================================================== 00:29:32.261 [2024-11-28T11:53:14.780Z] Total : 22796.50 89.05 0.00 0.00 0.00 0.00 0.00 00:29:32.261 00:29:33.634 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:33.634 Nvme0n1 : 7.00 22841.86 89.23 0.00 0.00 0.00 0.00 0.00 00:29:33.634 [2024-11-28T11:53:16.153Z] =================================================================================================================== 00:29:33.634 [2024-11-28T11:53:16.153Z] Total : 22841.86 89.23 0.00 0.00 0.00 0.00 0.00 00:29:33.634 00:29:34.206 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:34.206 Nvme0n1 : 8.00 22875.88 89.36 0.00 0.00 0.00 0.00 0.00 00:29:34.206 [2024-11-28T11:53:16.725Z] =================================================================================================================== 00:29:34.206 [2024-11-28T11:53:16.725Z] Total : 22875.88 89.36 0.00 0.00 0.00 0.00 0.00 00:29:34.206 00:29:35.581 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:35.581 Nvme0n1 : 9.00 22902.33 89.46 0.00 0.00 0.00 0.00 0.00 00:29:35.581 [2024-11-28T11:53:18.100Z] =================================================================================================================== 00:29:35.581 [2024-11-28T11:53:18.100Z] Total : 22902.33 89.46 0.00 0.00 0.00 0.00 0.00 00:29:35.581 00:29:36.516 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:36.516 Nvme0n1 : 10.00 22923.50 89.54 0.00 0.00 0.00 0.00 0.00 00:29:36.516 [2024-11-28T11:53:19.035Z] =================================================================================================================== 00:29:36.516 [2024-11-28T11:53:19.036Z] Total : 22923.50 89.54 0.00 0.00 0.00 0.00 0.00 00:29:36.517 00:29:36.517 
00:29:36.517 Latency(us) 00:29:36.517 [2024-11-28T11:53:19.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.517 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:36.517 Nvme0n1 : 10.00 22926.08 89.56 0.00 0.00 5579.97 5043.42 15614.66 00:29:36.517 [2024-11-28T11:53:19.036Z] =================================================================================================================== 00:29:36.517 [2024-11-28T11:53:19.036Z] Total : 22926.08 89.56 0.00 0.00 5579.97 5043.42 15614.66 00:29:36.517 { 00:29:36.517 "results": [ 00:29:36.517 { 00:29:36.517 "job": "Nvme0n1", 00:29:36.517 "core_mask": "0x2", 00:29:36.517 "workload": "randwrite", 00:29:36.517 "status": "finished", 00:29:36.517 "queue_depth": 128, 00:29:36.517 "io_size": 4096, 00:29:36.517 "runtime": 10.004456, 00:29:36.517 "iops": 22926.084136908594, 00:29:36.517 "mibps": 89.5550161597992, 00:29:36.517 "io_failed": 0, 00:29:36.517 "io_timeout": 0, 00:29:36.517 "avg_latency_us": 5579.969663732201, 00:29:36.517 "min_latency_us": 5043.422608695652, 00:29:36.517 "max_latency_us": 15614.664347826087 00:29:36.517 } 00:29:36.517 ], 00:29:36.517 "core_count": 1 00:29:36.517 } 00:29:36.517 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2718663 00:29:36.517 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2718663 ']' 00:29:36.517 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2718663 00:29:36.517 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:29:36.517 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:36.517 12:53:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2718663 00:29:36.517 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:36.517 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:36.517 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2718663' 00:29:36.517 killing process with pid 2718663 00:29:36.517 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2718663 00:29:36.517 Received shutdown signal, test time was about 10.000000 seconds 00:29:36.517 00:29:36.517 Latency(us) 00:29:36.517 [2024-11-28T11:53:19.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.517 [2024-11-28T11:53:19.036Z] =================================================================================================================== 00:29:36.517 [2024-11-28T11:53:19.036Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:36.517 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2718663 00:29:36.517 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:36.775 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:37.033 12:53:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 083cecad-7fd7-47f5-b431-5c8ec9455840 00:29:37.033 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:37.292 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:37.292 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:37.292 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2715707 00:29:37.292 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2715707 00:29:37.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2715707 Killed "${NVMF_APP[@]}" "$@" 00:29:37.292 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:37.292 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:37.292 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:37.292 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:37.292 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:37.292 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2720627 00:29:37.292 12:53:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2720627 00:29:37.292 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2720627 ']' 00:29:37.292 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.292 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:37.292 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.292 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:37.292 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:37.292 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:37.292 [2024-11-28 12:53:19.659756] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:37.292 [2024-11-28 12:53:19.660693] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:29:37.292 [2024-11-28 12:53:19.660729] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:37.292 [2024-11-28 12:53:19.730220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.292 [2024-11-28 12:53:19.771182] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:37.292 [2024-11-28 12:53:19.771216] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:37.292 [2024-11-28 12:53:19.771227] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:37.292 [2024-11-28 12:53:19.771233] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:37.292 [2024-11-28 12:53:19.771238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:37.292 [2024-11-28 12:53:19.771761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.551 [2024-11-28 12:53:19.841138] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:37.551 [2024-11-28 12:53:19.841372] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:37.551 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:37.551 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:37.551 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:37.551 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:37.551 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:37.551 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:37.551 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:37.809 [2024-11-28 12:53:20.074573] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:37.809 [2024-11-28 12:53:20.074684] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:37.809 [2024-11-28 12:53:20.074721] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:37.809 12:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:37.809 12:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d3471182-4568-4ec6-a282-49bfabbdd1fa 00:29:37.809 12:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=d3471182-4568-4ec6-a282-49bfabbdd1fa 00:29:37.809 12:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:37.809 12:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:37.809 12:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:37.809 12:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:37.809 12:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:37.809 12:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d3471182-4568-4ec6-a282-49bfabbdd1fa -t 2000 00:29:38.067 [ 00:29:38.067 { 00:29:38.067 "name": "d3471182-4568-4ec6-a282-49bfabbdd1fa", 00:29:38.067 "aliases": [ 00:29:38.067 "lvs/lvol" 00:29:38.067 ], 00:29:38.067 "product_name": "Logical Volume", 00:29:38.067 "block_size": 4096, 00:29:38.067 "num_blocks": 38912, 00:29:38.067 "uuid": "d3471182-4568-4ec6-a282-49bfabbdd1fa", 00:29:38.067 "assigned_rate_limits": { 00:29:38.067 "rw_ios_per_sec": 0, 00:29:38.067 "rw_mbytes_per_sec": 0, 00:29:38.067 "r_mbytes_per_sec": 0, 00:29:38.067 "w_mbytes_per_sec": 0 00:29:38.067 }, 00:29:38.067 "claimed": false, 00:29:38.067 "zoned": false, 00:29:38.067 "supported_io_types": { 00:29:38.067 "read": true, 00:29:38.067 "write": true, 00:29:38.067 "unmap": true, 00:29:38.067 "flush": false, 00:29:38.067 "reset": true, 00:29:38.067 "nvme_admin": false, 00:29:38.067 "nvme_io": false, 00:29:38.067 "nvme_io_md": false, 00:29:38.067 "write_zeroes": true, 
00:29:38.067 "zcopy": false, 00:29:38.067 "get_zone_info": false, 00:29:38.067 "zone_management": false, 00:29:38.067 "zone_append": false, 00:29:38.067 "compare": false, 00:29:38.067 "compare_and_write": false, 00:29:38.067 "abort": false, 00:29:38.067 "seek_hole": true, 00:29:38.067 "seek_data": true, 00:29:38.067 "copy": false, 00:29:38.067 "nvme_iov_md": false 00:29:38.067 }, 00:29:38.067 "driver_specific": { 00:29:38.067 "lvol": { 00:29:38.067 "lvol_store_uuid": "083cecad-7fd7-47f5-b431-5c8ec9455840", 00:29:38.067 "base_bdev": "aio_bdev", 00:29:38.067 "thin_provision": false, 00:29:38.067 "num_allocated_clusters": 38, 00:29:38.067 "snapshot": false, 00:29:38.067 "clone": false, 00:29:38.067 "esnap_clone": false 00:29:38.067 } 00:29:38.067 } 00:29:38.067 } 00:29:38.067 ] 00:29:38.067 12:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:38.067 12:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 083cecad-7fd7-47f5-b431-5c8ec9455840 00:29:38.067 12:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:38.326 12:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:38.326 12:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 083cecad-7fd7-47f5-b431-5c8ec9455840 00:29:38.326 12:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:38.584 12:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:38.584 12:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:38.584 [2024-11-28 12:53:21.088225] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:38.843 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 083cecad-7fd7-47f5-b431-5c8ec9455840 00:29:38.843 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:29:38.843 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 083cecad-7fd7-47f5-b431-5c8ec9455840 00:29:38.843 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.843 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:38.843 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.843 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:38.843 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.843 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:38.843 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.843 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:38.843 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 083cecad-7fd7-47f5-b431-5c8ec9455840 00:29:38.843 request: 00:29:38.843 { 00:29:38.843 "uuid": "083cecad-7fd7-47f5-b431-5c8ec9455840", 00:29:38.843 "method": "bdev_lvol_get_lvstores", 00:29:38.843 "req_id": 1 00:29:38.843 } 00:29:38.843 Got JSON-RPC error response 00:29:38.843 response: 00:29:38.843 { 00:29:38.843 "code": -19, 00:29:38.843 "message": "No such device" 00:29:38.843 } 00:29:38.843 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:29:38.843 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:38.843 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:38.843 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:38.843 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:39.101 aio_bdev 00:29:39.101 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d3471182-4568-4ec6-a282-49bfabbdd1fa 00:29:39.101 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d3471182-4568-4ec6-a282-49bfabbdd1fa 00:29:39.101 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:39.101 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:39.101 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:39.101 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:39.101 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:39.359 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d3471182-4568-4ec6-a282-49bfabbdd1fa -t 2000 00:29:39.617 [ 00:29:39.617 { 00:29:39.617 "name": "d3471182-4568-4ec6-a282-49bfabbdd1fa", 00:29:39.617 "aliases": [ 00:29:39.617 "lvs/lvol" 00:29:39.617 ], 00:29:39.617 "product_name": "Logical Volume", 00:29:39.617 "block_size": 4096, 00:29:39.617 "num_blocks": 38912, 00:29:39.617 "uuid": "d3471182-4568-4ec6-a282-49bfabbdd1fa", 00:29:39.617 "assigned_rate_limits": { 00:29:39.617 "rw_ios_per_sec": 0, 00:29:39.617 "rw_mbytes_per_sec": 0, 00:29:39.617 
"r_mbytes_per_sec": 0, 00:29:39.617 "w_mbytes_per_sec": 0 00:29:39.617 }, 00:29:39.617 "claimed": false, 00:29:39.617 "zoned": false, 00:29:39.617 "supported_io_types": { 00:29:39.617 "read": true, 00:29:39.617 "write": true, 00:29:39.617 "unmap": true, 00:29:39.617 "flush": false, 00:29:39.617 "reset": true, 00:29:39.617 "nvme_admin": false, 00:29:39.617 "nvme_io": false, 00:29:39.617 "nvme_io_md": false, 00:29:39.617 "write_zeroes": true, 00:29:39.617 "zcopy": false, 00:29:39.617 "get_zone_info": false, 00:29:39.617 "zone_management": false, 00:29:39.617 "zone_append": false, 00:29:39.617 "compare": false, 00:29:39.617 "compare_and_write": false, 00:29:39.617 "abort": false, 00:29:39.617 "seek_hole": true, 00:29:39.617 "seek_data": true, 00:29:39.617 "copy": false, 00:29:39.617 "nvme_iov_md": false 00:29:39.617 }, 00:29:39.617 "driver_specific": { 00:29:39.617 "lvol": { 00:29:39.617 "lvol_store_uuid": "083cecad-7fd7-47f5-b431-5c8ec9455840", 00:29:39.617 "base_bdev": "aio_bdev", 00:29:39.617 "thin_provision": false, 00:29:39.617 "num_allocated_clusters": 38, 00:29:39.617 "snapshot": false, 00:29:39.617 "clone": false, 00:29:39.617 "esnap_clone": false 00:29:39.617 } 00:29:39.617 } 00:29:39.617 } 00:29:39.617 ] 00:29:39.617 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:39.617 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 083cecad-7fd7-47f5-b431-5c8ec9455840 00:29:39.617 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:39.617 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:39.617 12:53:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 083cecad-7fd7-47f5-b431-5c8ec9455840 00:29:39.617 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:39.875 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:39.875 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d3471182-4568-4ec6-a282-49bfabbdd1fa 00:29:40.133 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 083cecad-7fd7-47f5-b431-5c8ec9455840 00:29:40.391 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:40.391 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:40.391 00:29:40.391 real 0m17.069s 00:29:40.391 user 0m34.540s 00:29:40.391 sys 0m3.760s 00:29:40.391 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:40.391 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:40.391 ************************************ 00:29:40.391 END TEST lvs_grow_dirty 00:29:40.391 ************************************ 
00:29:40.650 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:40.650 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:29:40.650 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:29:40.650 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:29:40.650 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:40.650 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:29:40.650 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:29:40.650 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:29:40.650 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:40.650 nvmf_trace.0 00:29:40.650 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:29:40.650 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:40.650 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:40.650 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:29:40.650 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:40.650 12:53:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:40.650 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:40.650 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:40.650 rmmod nvme_tcp 00:29:40.650 rmmod nvme_fabrics 00:29:40.650 rmmod nvme_keyring 00:29:40.650 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:40.650 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:40.650 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:40.650 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2720627 ']' 00:29:40.650 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2720627 00:29:40.650 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2720627 ']' 00:29:40.651 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2720627 00:29:40.651 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:29:40.651 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:40.651 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2720627 00:29:40.651 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:40.651 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:40.651 
12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2720627' 00:29:40.651 killing process with pid 2720627 00:29:40.651 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2720627 00:29:40.651 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2720627 00:29:40.910 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:40.910 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:40.910 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:40.910 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:40.910 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:29:40.910 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:29:40.910 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:40.910 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:40.910 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:40.910 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.910 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:40.910 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.837 
12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:42.837 00:29:42.837 real 0m41.884s 00:29:42.837 user 0m52.336s 00:29:42.837 sys 0m9.981s 00:29:42.837 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:42.837 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:42.837 ************************************ 00:29:42.837 END TEST nvmf_lvs_grow 00:29:42.837 ************************************ 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:43.133 ************************************ 00:29:43.133 START TEST nvmf_bdev_io_wait 00:29:43.133 ************************************ 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:43.133 * Looking for test storage... 
00:29:43.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:43.133 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:43.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.134 --rc genhtml_branch_coverage=1 00:29:43.134 --rc genhtml_function_coverage=1 00:29:43.134 --rc genhtml_legend=1 00:29:43.134 --rc geninfo_all_blocks=1 00:29:43.134 --rc geninfo_unexecuted_blocks=1 00:29:43.134 00:29:43.134 ' 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:43.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.134 --rc genhtml_branch_coverage=1 00:29:43.134 --rc genhtml_function_coverage=1 00:29:43.134 --rc genhtml_legend=1 00:29:43.134 --rc geninfo_all_blocks=1 00:29:43.134 --rc geninfo_unexecuted_blocks=1 00:29:43.134 00:29:43.134 ' 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:43.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.134 --rc genhtml_branch_coverage=1 00:29:43.134 --rc genhtml_function_coverage=1 00:29:43.134 --rc genhtml_legend=1 00:29:43.134 --rc geninfo_all_blocks=1 00:29:43.134 --rc geninfo_unexecuted_blocks=1 00:29:43.134 00:29:43.134 ' 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:43.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.134 --rc genhtml_branch_coverage=1 00:29:43.134 --rc genhtml_function_coverage=1 
00:29:43.134 --rc genhtml_legend=1 00:29:43.134 --rc geninfo_all_blocks=1 00:29:43.134 --rc geninfo_unexecuted_blocks=1 00:29:43.134 00:29:43.134 ' 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:43.134 12:53:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.134 12:53:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:43.134 12:53:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:43.134 12:53:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:43.134 12:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:48.513 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.513 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:29:48.513 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:48.513 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:48.513 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:48.513 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:48.513 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:48.513 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:29:48.513 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:48.513 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:29:48.513 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:29:48.513 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:29:48.513 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:29:48.513 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:29:48.513 12:53:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:48.514 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:48.514 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:48.514 Found net devices under 0000:86:00.0: cvl_0_0 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:48.514 Found net devices under 0000:86:00.1: cvl_0_1 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:29:48.514 12:53:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:48.514 12:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:48.514 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:48.514 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:48.514 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:48.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:48.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:29:48.514 00:29:48.514 --- 10.0.0.2 ping statistics --- 00:29:48.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.514 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:29:48.514 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:48.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:48.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:29:48.514 00:29:48.514 --- 10.0.0.1 ping statistics --- 00:29:48.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.514 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:29:48.514 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:48.514 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:29:48.514 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:48.514 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:48.514 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:48.514 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:48.514 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:48.514 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:48.514 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:48.772 12:53:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:48.772 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:48.772 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:48.772 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:48.772 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2724684 00:29:48.772 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:29:48.772 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2724684 00:29:48.772 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2724684 ']' 00:29:48.772 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.772 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.772 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:48.772 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.772 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:48.772 [2024-11-28 12:53:31.122252] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:48.772 [2024-11-28 12:53:31.123170] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:29:48.772 [2024-11-28 12:53:31.123205] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.772 [2024-11-28 12:53:31.187139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:48.772 [2024-11-28 12:53:31.230996] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:48.772 [2024-11-28 12:53:31.231034] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:48.772 [2024-11-28 12:53:31.231041] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.772 [2024-11-28 12:53:31.231048] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.772 [2024-11-28 12:53:31.231053] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:48.772 [2024-11-28 12:53:31.232537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.772 [2024-11-28 12:53:31.232629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:48.772 [2024-11-28 12:53:31.232739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:48.772 [2024-11-28 12:53:31.232741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.772 [2024-11-28 12:53:31.233057] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:48.772 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:48.772 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:29:48.772 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:48.772 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:48.772 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.031 12:53:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:49.031 [2024-11-28 12:53:31.359917] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:49.031 [2024-11-28 12:53:31.359995] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:49.031 [2024-11-28 12:53:31.360521] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:49.031 [2024-11-28 12:53:31.360985] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:49.031 [2024-11-28 12:53:31.373471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:49.031 Malloc0 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.031 12:53:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:49.031 [2024-11-28 12:53:31.429376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.031 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2724707 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2724709 00:29:49.032 12:53:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:49.032 { 00:29:49.032 "params": { 00:29:49.032 "name": "Nvme$subsystem", 00:29:49.032 "trtype": "$TEST_TRANSPORT", 00:29:49.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.032 "adrfam": "ipv4", 00:29:49.032 "trsvcid": "$NVMF_PORT", 00:29:49.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.032 "hdgst": ${hdgst:-false}, 00:29:49.032 "ddgst": ${ddgst:-false} 00:29:49.032 }, 00:29:49.032 "method": "bdev_nvme_attach_controller" 00:29:49.032 } 00:29:49.032 EOF 00:29:49.032 )") 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2724711 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:49.032 12:53:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:49.032 { 00:29:49.032 "params": { 00:29:49.032 "name": "Nvme$subsystem", 00:29:49.032 "trtype": "$TEST_TRANSPORT", 00:29:49.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.032 "adrfam": "ipv4", 00:29:49.032 "trsvcid": "$NVMF_PORT", 00:29:49.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.032 "hdgst": ${hdgst:-false}, 00:29:49.032 "ddgst": ${ddgst:-false} 00:29:49.032 }, 00:29:49.032 "method": "bdev_nvme_attach_controller" 00:29:49.032 } 00:29:49.032 EOF 00:29:49.032 )") 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2724714 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:29:49.032 12:53:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:49.032 { 00:29:49.032 "params": { 00:29:49.032 "name": "Nvme$subsystem", 00:29:49.032 "trtype": "$TEST_TRANSPORT", 00:29:49.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.032 "adrfam": "ipv4", 00:29:49.032 "trsvcid": "$NVMF_PORT", 00:29:49.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.032 "hdgst": ${hdgst:-false}, 00:29:49.032 "ddgst": ${ddgst:-false} 00:29:49.032 }, 00:29:49.032 "method": "bdev_nvme_attach_controller" 00:29:49.032 } 00:29:49.032 EOF 00:29:49.032 )") 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:49.032 { 00:29:49.032 "params": { 00:29:49.032 "name": "Nvme$subsystem", 00:29:49.032 "trtype": "$TEST_TRANSPORT", 00:29:49.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.032 "adrfam": "ipv4", 00:29:49.032 "trsvcid": "$NVMF_PORT", 00:29:49.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.032 "hdgst": ${hdgst:-false}, 00:29:49.032 "ddgst": ${ddgst:-false} 
00:29:49.032 }, 00:29:49.032 "method": "bdev_nvme_attach_controller" 00:29:49.032 } 00:29:49.032 EOF 00:29:49.032 )") 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2724707 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:49.032 "params": { 00:29:49.032 "name": "Nvme1", 00:29:49.032 "trtype": "tcp", 00:29:49.032 "traddr": "10.0.0.2", 00:29:49.032 "adrfam": "ipv4", 00:29:49.032 "trsvcid": "4420", 00:29:49.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:49.032 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:49.032 "hdgst": false, 00:29:49.032 "ddgst": false 00:29:49.032 }, 00:29:49.032 "method": "bdev_nvme_attach_controller" 00:29:49.032 }' 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:49.032 "params": { 00:29:49.032 "name": "Nvme1", 00:29:49.032 "trtype": "tcp", 00:29:49.032 "traddr": "10.0.0.2", 00:29:49.032 "adrfam": "ipv4", 00:29:49.032 "trsvcid": "4420", 00:29:49.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:49.032 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:49.032 "hdgst": false, 00:29:49.032 "ddgst": false 00:29:49.032 }, 00:29:49.032 "method": "bdev_nvme_attach_controller" 00:29:49.032 }' 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:49.032 "params": { 00:29:49.032 "name": "Nvme1", 00:29:49.032 "trtype": "tcp", 00:29:49.032 "traddr": "10.0.0.2", 00:29:49.032 "adrfam": "ipv4", 00:29:49.032 "trsvcid": "4420", 00:29:49.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:49.032 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:49.032 "hdgst": false, 00:29:49.032 "ddgst": false 00:29:49.032 }, 00:29:49.032 "method": "bdev_nvme_attach_controller" 00:29:49.032 }' 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:49.032 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:49.032 "params": { 00:29:49.032 "name": "Nvme1", 00:29:49.032 "trtype": "tcp", 00:29:49.032 "traddr": "10.0.0.2", 00:29:49.032 "adrfam": "ipv4", 00:29:49.032 "trsvcid": "4420", 00:29:49.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:49.032 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:49.032 "hdgst": false, 00:29:49.032 "ddgst": false 00:29:49.032 }, 00:29:49.032 "method": "bdev_nvme_attach_controller" 
00:29:49.032 }' 00:29:49.032 [2024-11-28 12:53:31.481350] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:29:49.032 [2024-11-28 12:53:31.481393] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:29:49.032 [2024-11-28 12:53:31.482320] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:29:49.032 [2024-11-28 12:53:31.482319] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:29:49.032 [2024-11-28 12:53:31.482373] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:29:49.032 [2024-11-28 12:53:31.482374] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:49.032 [2024-11-28 12:53:31.486855] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:29:49.032 [2024-11-28 12:53:31.486896] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:29:49.290 [2024-11-28 12:53:31.683567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.290 [2024-11-28 12:53:31.738363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:49.290 [2024-11-28 12:53:31.741955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.290 [2024-11-28 12:53:31.784677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:49.290 [2024-11-28 12:53:31.793857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.547 [2024-11-28 12:53:31.832119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:49.547 [2024-11-28 12:53:31.887950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.547 [2024-11-28 12:53:31.941443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:49.547 Running I/O for 1 seconds... 00:29:49.547 Running I/O for 1 seconds... 00:29:49.547 Running I/O for 1 seconds... 00:29:49.804 Running I/O for 1 seconds... 
00:29:50.737 12378.00 IOPS, 48.35 MiB/s 00:29:50.738 Latency(us) 00:29:50.738 [2024-11-28T11:53:33.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.738 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:29:50.738 Nvme1n1 : 1.01 12421.87 48.52 0.00 0.00 10267.60 3433.52 12081.42 00:29:50.738 [2024-11-28T11:53:33.257Z] =================================================================================================================== 00:29:50.738 [2024-11-28T11:53:33.257Z] Total : 12421.87 48.52 0.00 0.00 10267.60 3433.52 12081.42 00:29:50.738 236136.00 IOPS, 922.41 MiB/s [2024-11-28T11:53:33.257Z] 11756.00 IOPS, 45.92 MiB/s 00:29:50.738 Latency(us) 00:29:50.738 [2024-11-28T11:53:33.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.738 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:29:50.738 Nvme1n1 : 1.00 235736.41 920.85 0.00 0.00 540.12 229.73 1688.26 00:29:50.738 [2024-11-28T11:53:33.257Z] =================================================================================================================== 00:29:50.738 [2024-11-28T11:53:33.257Z] Total : 235736.41 920.85 0.00 0.00 540.12 229.73 1688.26 00:29:50.738 00:29:50.738 Latency(us) 00:29:50.738 [2024-11-28T11:53:33.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.738 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:29:50.738 Nvme1n1 : 1.01 11832.14 46.22 0.00 0.00 10787.86 3875.17 13791.05 00:29:50.738 [2024-11-28T11:53:33.257Z] =================================================================================================================== 00:29:50.738 [2024-11-28T11:53:33.257Z] Total : 11832.14 46.22 0.00 0.00 10787.86 3875.17 13791.05 00:29:50.738 11298.00 IOPS, 44.13 MiB/s 00:29:50.738 Latency(us) 00:29:50.738 [2024-11-28T11:53:33.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:29:50.738 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:29:50.738 Nvme1n1 : 1.01 11396.59 44.52 0.00 0.00 11202.13 1567.17 18350.08 00:29:50.738 [2024-11-28T11:53:33.257Z] =================================================================================================================== 00:29:50.738 [2024-11-28T11:53:33.257Z] Total : 11396.59 44.52 0.00 0.00 11202.13 1567.17 18350.08 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2724709 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2724711 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2724714 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:50.997 rmmod nvme_tcp 00:29:50.997 rmmod nvme_fabrics 00:29:50.997 rmmod nvme_keyring 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2724684 ']' 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2724684 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2724684 ']' 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2724684 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2724684 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:50.997 12:53:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2724684' 00:29:50.997 killing process with pid 2724684 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2724684 00:29:50.997 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2724684 00:29:51.256 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:51.256 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:51.256 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:51.256 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:29:51.256 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:29:51.256 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:29:51.256 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:51.256 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:51.256 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:51.256 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.256 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:51.256 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:53.791 00:29:53.791 real 0m10.314s 00:29:53.791 user 0m15.001s 00:29:53.791 sys 0m6.126s 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:53.791 ************************************ 00:29:53.791 END TEST nvmf_bdev_io_wait 00:29:53.791 ************************************ 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:53.791 ************************************ 00:29:53.791 START TEST nvmf_queue_depth 00:29:53.791 ************************************ 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:53.791 * Looking for test storage... 
00:29:53.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:53.791 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:53.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.791 --rc genhtml_branch_coverage=1 00:29:53.791 --rc genhtml_function_coverage=1 00:29:53.791 --rc genhtml_legend=1 00:29:53.791 --rc geninfo_all_blocks=1 00:29:53.791 --rc geninfo_unexecuted_blocks=1 00:29:53.792 00:29:53.792 ' 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:53.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.792 --rc genhtml_branch_coverage=1 00:29:53.792 --rc genhtml_function_coverage=1 00:29:53.792 --rc genhtml_legend=1 00:29:53.792 --rc geninfo_all_blocks=1 00:29:53.792 --rc geninfo_unexecuted_blocks=1 00:29:53.792 00:29:53.792 ' 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:53.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.792 --rc genhtml_branch_coverage=1 00:29:53.792 --rc genhtml_function_coverage=1 00:29:53.792 --rc genhtml_legend=1 00:29:53.792 --rc geninfo_all_blocks=1 00:29:53.792 --rc geninfo_unexecuted_blocks=1 00:29:53.792 00:29:53.792 ' 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:53.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.792 --rc genhtml_branch_coverage=1 00:29:53.792 --rc genhtml_function_coverage=1 00:29:53.792 --rc genhtml_legend=1 00:29:53.792 --rc 
geninfo_all_blocks=1 00:29:53.792 --rc geninfo_unexecuted_blocks=1 00:29:53.792 00:29:53.792 ' 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.792 12:53:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:53.792 12:53:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:53.792 12:53:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:29:53.792 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:59.065 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:59.065 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:29:59.065 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:59.065 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:59.065 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:59.065 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:59.065 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:59.065 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:29:59.065 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:59.065 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:29:59.065 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:29:59.065 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:29:59.065 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:29:59.065 
12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:29:59.065 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:29:59.065 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.065 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.065 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.065 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.065 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:59.066 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.066 12:53:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:59.066 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:59.066 Found net devices under 0000:86:00.0: cvl_0_0 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:59.066 Found net devices under 0000:86:00.1: cvl_0_1 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:59.066 12:53:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:59.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:59.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:29:59.066 00:29:59.066 --- 10.0.0.2 ping statistics --- 00:29:59.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.066 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:59.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:29:59.066 00:29:59.066 --- 10.0.0.1 ping statistics --- 00:29:59.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.066 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:59.066 12:53:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:59.066 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:59.067 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2728481 00:29:59.067 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2728481 00:29:59.067 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:29:59.067 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2728481 ']' 00:29:59.067 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.067 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:59.067 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:59.067 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:59.067 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:59.067 [2024-11-28 12:53:41.422718] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:59.067 [2024-11-28 12:53:41.423716] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:29:59.067 [2024-11-28 12:53:41.423757] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.067 [2024-11-28 12:53:41.494151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.067 [2024-11-28 12:53:41.536375] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.067 [2024-11-28 12:53:41.536411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.067 [2024-11-28 12:53:41.536419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.067 [2024-11-28 12:53:41.536425] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:59.067 [2024-11-28 12:53:41.536430] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:59.067 [2024-11-28 12:53:41.536942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.326 [2024-11-28 12:53:41.606113] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:59.326 [2024-11-28 12:53:41.606354] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:59.326 [2024-11-28 12:53:41.677602] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:59.326 Malloc0 00:29:59.326 12:53:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:59.326 [2024-11-28 12:53:41.741438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.326 
12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2728500 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2728500 /var/tmp/bdevperf.sock 00:29:59.326 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2728500 ']' 00:29:59.327 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:59.327 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:59.327 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:59.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:59.327 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:59.327 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:59.327 [2024-11-28 12:53:41.791573] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:29:59.327 [2024-11-28 12:53:41.791613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2728500 ] 00:29:59.586 [2024-11-28 12:53:41.852719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.586 [2024-11-28 12:53:41.894454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.586 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:59.586 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:29:59.586 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:59.586 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.586 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:59.586 NVMe0n1 00:29:59.586 12:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.586 12:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:59.845 Running I/O for 10 seconds... 
00:30:01.718 11272.00 IOPS, 44.03 MiB/s [2024-11-28T11:53:45.175Z] 11739.50 IOPS, 45.86 MiB/s [2024-11-28T11:53:46.551Z] 11608.67 IOPS, 45.35 MiB/s [2024-11-28T11:53:47.487Z] 11715.00 IOPS, 45.76 MiB/s [2024-11-28T11:53:48.423Z] 11715.00 IOPS, 45.76 MiB/s [2024-11-28T11:53:49.360Z] 11784.33 IOPS, 46.03 MiB/s [2024-11-28T11:53:50.297Z] 11828.29 IOPS, 46.20 MiB/s [2024-11-28T11:53:51.233Z] 11837.88 IOPS, 46.24 MiB/s [2024-11-28T11:53:52.611Z] 11842.56 IOPS, 46.26 MiB/s [2024-11-28T11:53:52.611Z] 11882.90 IOPS, 46.42 MiB/s 00:30:10.092 Latency(us) 00:30:10.092 [2024-11-28T11:53:52.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:10.092 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:10.092 Verification LBA range: start 0x0 length 0x4000 00:30:10.092 NVMe0n1 : 10.06 11914.61 46.54 0.00 0.00 85674.83 18464.06 55392.17 00:30:10.092 [2024-11-28T11:53:52.611Z] =================================================================================================================== 00:30:10.092 [2024-11-28T11:53:52.611Z] Total : 11914.61 46.54 0.00 0.00 85674.83 18464.06 55392.17 00:30:10.092 { 00:30:10.092 "results": [ 00:30:10.092 { 00:30:10.092 "job": "NVMe0n1", 00:30:10.092 "core_mask": "0x1", 00:30:10.092 "workload": "verify", 00:30:10.092 "status": "finished", 00:30:10.092 "verify_range": { 00:30:10.092 "start": 0, 00:30:10.092 "length": 16384 00:30:10.092 }, 00:30:10.092 "queue_depth": 1024, 00:30:10.092 "io_size": 4096, 00:30:10.092 "runtime": 10.056895, 00:30:10.092 "iops": 11914.611816072456, 00:30:10.092 "mibps": 46.54145240653303, 00:30:10.092 "io_failed": 0, 00:30:10.092 "io_timeout": 0, 00:30:10.092 "avg_latency_us": 85674.83161924446, 00:30:10.092 "min_latency_us": 18464.055652173913, 00:30:10.092 "max_latency_us": 55392.16695652174 00:30:10.092 } 00:30:10.092 ], 00:30:10.092 "core_count": 1 00:30:10.092 } 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 2728500 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2728500 ']' 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2728500 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2728500 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2728500' 00:30:10.092 killing process with pid 2728500 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2728500 00:30:10.092 Received shutdown signal, test time was about 10.000000 seconds 00:30:10.092 00:30:10.092 Latency(us) 00:30:10.092 [2024-11-28T11:53:52.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:10.092 [2024-11-28T11:53:52.611Z] =================================================================================================================== 00:30:10.092 [2024-11-28T11:53:52.611Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2728500 00:30:10.092 12:53:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:10.092 rmmod nvme_tcp 00:30:10.092 rmmod nvme_fabrics 00:30:10.092 rmmod nvme_keyring 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2728481 ']' 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2728481 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2728481 ']' 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2728481 00:30:10.092 12:53:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:10.092 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2728481 00:30:10.351 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:10.351 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:10.351 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2728481' 00:30:10.351 killing process with pid 2728481 00:30:10.351 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2728481 00:30:10.351 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2728481 00:30:10.351 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:10.351 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:10.351 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:10.351 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:10.351 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:10.351 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:10.351 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:30:10.351 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:10.351 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:10.351 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.351 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:10.351 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.883 12:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:12.883 00:30:12.883 real 0m19.107s 00:30:12.883 user 0m22.413s 00:30:12.883 sys 0m5.946s 00:30:12.883 12:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:12.883 12:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:12.883 ************************************ 00:30:12.883 END TEST nvmf_queue_depth 00:30:12.883 ************************************ 00:30:12.883 12:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:12.883 12:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:12.883 12:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:12.883 12:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:12.883 ************************************ 00:30:12.883 START 
TEST nvmf_target_multipath 00:30:12.883 ************************************ 00:30:12.883 12:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:12.883 * Looking for test storage... 00:30:12.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:12.883 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:12.883 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:30:12.883 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:12.883 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:12.884 12:53:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:12.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.884 --rc genhtml_branch_coverage=1 00:30:12.884 --rc genhtml_function_coverage=1 00:30:12.884 --rc genhtml_legend=1 00:30:12.884 --rc geninfo_all_blocks=1 00:30:12.884 --rc geninfo_unexecuted_blocks=1 00:30:12.884 00:30:12.884 ' 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:12.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.884 --rc genhtml_branch_coverage=1 00:30:12.884 --rc genhtml_function_coverage=1 00:30:12.884 --rc genhtml_legend=1 00:30:12.884 --rc geninfo_all_blocks=1 00:30:12.884 --rc geninfo_unexecuted_blocks=1 00:30:12.884 00:30:12.884 ' 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:12.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.884 --rc genhtml_branch_coverage=1 00:30:12.884 --rc genhtml_function_coverage=1 00:30:12.884 --rc genhtml_legend=1 00:30:12.884 --rc geninfo_all_blocks=1 00:30:12.884 --rc geninfo_unexecuted_blocks=1 00:30:12.884 00:30:12.884 ' 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:12.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.884 --rc genhtml_branch_coverage=1 00:30:12.884 --rc genhtml_function_coverage=1 00:30:12.884 --rc genhtml_legend=1 00:30:12.884 --rc geninfo_all_blocks=1 00:30:12.884 --rc geninfo_unexecuted_blocks=1 00:30:12.884 00:30:12.884 ' 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:12.884 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:12.885 12:53:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:12.885 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:12.885 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:12.885 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:12.885 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:12.885 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:12.885 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:12.885 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:12.885 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:12.885 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:12.885 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:12.885 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.885 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:12.885 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.885 12:53:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:12.885 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:12.885 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:12.885 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:18.151 12:54:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:18.151 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.151 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:18.152 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:18.152 Found net devices under 0000:86:00.0: cvl_0_0 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.152 12:54:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:18.152 Found net devices under 0000:86:00.1: cvl_0_1 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:18.152 12:54:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:18.152 12:54:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:18.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:18.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:30:18.152 00:30:18.152 --- 10.0.0.2 ping statistics --- 00:30:18.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.152 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:18.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:18.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:30:18.152 00:30:18.152 --- 10.0.0.1 ping statistics --- 00:30:18.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.152 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:18.152 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:18.153 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:18.153 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:18.153 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:18.153 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:18.153 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:18.153 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:18.153 only one NIC for nvmf test 00:30:18.153 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:18.153 12:54:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:18.153 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:18.153 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:18.153 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:18.153 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:18.153 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:18.153 rmmod nvme_tcp 00:30:18.412 rmmod nvme_fabrics 00:30:18.412 rmmod nvme_keyring 00:30:18.412 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:18.412 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:18.412 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:18.412 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:18.412 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:18.412 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:18.412 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:18.412 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:18.412 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:18.412 12:54:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:18.412 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:18.412 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:18.412 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:18.412 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.412 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:18.412 12:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:20.317 
12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.317 12:54:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:20.317 00:30:20.317 real 0m7.895s 00:30:20.317 user 0m1.767s 00:30:20.317 sys 0m4.131s 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:20.317 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:20.317 ************************************ 00:30:20.317 END TEST nvmf_target_multipath 00:30:20.317 ************************************ 00:30:20.577 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:20.577 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:20.577 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:20.577 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:20.577 ************************************ 00:30:20.577 START TEST nvmf_zcopy 00:30:20.577 ************************************ 00:30:20.577 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:20.577 * Looking for test storage... 
00:30:20.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:20.578 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:20.578 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:30:20.578 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:20.578 12:54:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:20.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.578 --rc genhtml_branch_coverage=1 00:30:20.578 --rc genhtml_function_coverage=1 00:30:20.578 --rc genhtml_legend=1 00:30:20.578 --rc geninfo_all_blocks=1 00:30:20.578 --rc geninfo_unexecuted_blocks=1 00:30:20.578 00:30:20.578 ' 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:20.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.578 --rc genhtml_branch_coverage=1 00:30:20.578 --rc genhtml_function_coverage=1 00:30:20.578 --rc genhtml_legend=1 00:30:20.578 --rc geninfo_all_blocks=1 00:30:20.578 --rc geninfo_unexecuted_blocks=1 00:30:20.578 00:30:20.578 ' 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:20.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.578 --rc genhtml_branch_coverage=1 00:30:20.578 --rc genhtml_function_coverage=1 00:30:20.578 --rc genhtml_legend=1 00:30:20.578 --rc geninfo_all_blocks=1 00:30:20.578 --rc geninfo_unexecuted_blocks=1 00:30:20.578 00:30:20.578 ' 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:20.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.578 --rc genhtml_branch_coverage=1 00:30:20.578 --rc genhtml_function_coverage=1 00:30:20.578 --rc genhtml_legend=1 00:30:20.578 --rc geninfo_all_blocks=1 00:30:20.578 --rc geninfo_unexecuted_blocks=1 00:30:20.578 00:30:20.578 ' 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:20.578 12:54:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:20.578 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:20.579 12:54:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:20.579 12:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:25.851 
12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:25.851 12:54:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:25.851 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:25.851 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:25.852 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:25.852 Found net devices under 0000:86:00.0: cvl_0_0 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:25.852 Found net devices under 0000:86:00.1: cvl_0_1 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:25.852 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:26.111 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:26.111 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:26.111 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:26.111 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:26.111 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:26.111 12:54:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:26.111 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:26.111 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:26.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:26.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:30:26.111 00:30:26.111 --- 10.0.0.2 ping statistics --- 00:30:26.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.111 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:30:26.112 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:26.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:26.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:30:26.112 00:30:26.112 --- 10.0.0.1 ping statistics --- 00:30:26.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.112 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:30:26.112 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:26.112 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:26.112 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:26.112 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:26.112 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:26.112 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:26.112 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:26.112 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:26.112 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:26.112 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:26.112 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:26.112 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:26.112 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:26.112 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=2737426 00:30:26.112 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:26.112 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2737426 00:30:26.112 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2737426 ']' 00:30:26.112 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.112 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:26.112 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:26.112 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:26.112 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:26.370 [2024-11-28 12:54:08.665978] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:26.370 [2024-11-28 12:54:08.666927] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:30:26.370 [2024-11-28 12:54:08.666969] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:26.370 [2024-11-28 12:54:08.734466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.370 [2024-11-28 12:54:08.773344] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:26.370 [2024-11-28 12:54:08.773382] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:26.370 [2024-11-28 12:54:08.773389] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:26.370 [2024-11-28 12:54:08.773395] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:26.370 [2024-11-28 12:54:08.773399] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:26.371 [2024-11-28 12:54:08.773958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.371 [2024-11-28 12:54:08.843278] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:26.371 [2024-11-28 12:54:08.843504] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:26.371 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:26.371 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:30:26.371 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:26.371 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:26.371 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:26.629 [2024-11-28 12:54:08.906390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:26.629 
12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:26.629 [2024-11-28 12:54:08.922553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:26.629 malloc0 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:26.629 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:26.630 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:26.630 { 00:30:26.630 "params": { 00:30:26.630 "name": "Nvme$subsystem", 00:30:26.630 "trtype": "$TEST_TRANSPORT", 00:30:26.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:26.630 "adrfam": "ipv4", 00:30:26.630 "trsvcid": "$NVMF_PORT", 00:30:26.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:26.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:26.630 "hdgst": ${hdgst:-false}, 00:30:26.630 "ddgst": ${ddgst:-false} 00:30:26.630 }, 00:30:26.630 "method": "bdev_nvme_attach_controller" 00:30:26.630 } 00:30:26.630 EOF 00:30:26.630 )") 00:30:26.630 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:26.630 12:54:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:26.630 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:26.630 12:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:26.630 "params": { 00:30:26.630 "name": "Nvme1", 00:30:26.630 "trtype": "tcp", 00:30:26.630 "traddr": "10.0.0.2", 00:30:26.630 "adrfam": "ipv4", 00:30:26.630 "trsvcid": "4420", 00:30:26.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:26.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:26.630 "hdgst": false, 00:30:26.630 "ddgst": false 00:30:26.630 }, 00:30:26.630 "method": "bdev_nvme_attach_controller" 00:30:26.630 }' 00:30:26.630 [2024-11-28 12:54:09.007123] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:30:26.630 [2024-11-28 12:54:09.007170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2737534 ] 00:30:26.630 [2024-11-28 12:54:09.071111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.630 [2024-11-28 12:54:09.114890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.196 Running I/O for 10 seconds... 
00:30:29.065 8173.00 IOPS, 63.85 MiB/s [2024-11-28T11:54:12.519Z] 8295.00 IOPS, 64.80 MiB/s [2024-11-28T11:54:13.452Z] 8356.67 IOPS, 65.29 MiB/s [2024-11-28T11:54:14.828Z] 8372.00 IOPS, 65.41 MiB/s [2024-11-28T11:54:15.780Z] 8385.20 IOPS, 65.51 MiB/s [2024-11-28T11:54:16.715Z] 8398.17 IOPS, 65.61 MiB/s [2024-11-28T11:54:17.650Z] 8410.43 IOPS, 65.71 MiB/s [2024-11-28T11:54:18.586Z] 8411.25 IOPS, 65.71 MiB/s [2024-11-28T11:54:19.523Z] 8419.78 IOPS, 65.78 MiB/s [2024-11-28T11:54:19.523Z] 8425.70 IOPS, 65.83 MiB/s 00:30:37.004 Latency(us) 00:30:37.004 [2024-11-28T11:54:19.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:37.004 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:30:37.004 Verification LBA range: start 0x0 length 0x1000 00:30:37.004 Nvme1n1 : 10.01 8429.30 65.85 0.00 0.00 15141.57 2421.98 21883.33 00:30:37.004 [2024-11-28T11:54:19.523Z] =================================================================================================================== 00:30:37.004 [2024-11-28T11:54:19.523Z] Total : 8429.30 65.85 0.00 0.00 15141.57 2421.98 21883.33 00:30:37.263 12:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2739355 00:30:37.263 12:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:30:37.264 12:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:37.264 12:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:30:37.264 12:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:30:37.264 12:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:37.264 12:54:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:37.264 12:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:37.264 12:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:37.264 { 00:30:37.264 "params": { 00:30:37.264 "name": "Nvme$subsystem", 00:30:37.264 "trtype": "$TEST_TRANSPORT", 00:30:37.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:37.264 "adrfam": "ipv4", 00:30:37.264 "trsvcid": "$NVMF_PORT", 00:30:37.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:37.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:37.264 "hdgst": ${hdgst:-false}, 00:30:37.264 "ddgst": ${ddgst:-false} 00:30:37.264 }, 00:30:37.264 "method": "bdev_nvme_attach_controller" 00:30:37.264 } 00:30:37.264 EOF 00:30:37.264 )") 00:30:37.264 12:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:37.264 [2024-11-28 12:54:19.634310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:37.264 [2024-11-28 12:54:19.634346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:37.264 12:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:30:37.264 12:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:37.264 12:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:37.264 "params": { 00:30:37.264 "name": "Nvme1", 00:30:37.264 "trtype": "tcp", 00:30:37.264 "traddr": "10.0.0.2", 00:30:37.264 "adrfam": "ipv4", 00:30:37.264 "trsvcid": "4420", 00:30:37.264 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:37.264 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:37.264 "hdgst": false, 00:30:37.264 "ddgst": false 00:30:37.264 }, 00:30:37.264 "method": "bdev_nvme_attach_controller" 00:30:37.264 }' 00:30:37.264 [2024-11-28 12:54:19.646266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:37.264 [2024-11-28 12:54:19.646278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:37.264 [2024-11-28 12:54:19.658263] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:37.264 [2024-11-28 12:54:19.658274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:37.264 [2024-11-28 12:54:19.670260] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:37.264 [2024-11-28 12:54:19.670271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:37.264 [2024-11-28 12:54:19.672455] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:30:37.264 [2024-11-28 12:54:19.672497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2739355 ] 00:30:37.264 [2024-11-28 12:54:19.682261] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:37.264 [2024-11-28 12:54:19.682272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:37.264 [2024-11-28 12:54:19.694263] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:37.264 [2024-11-28 12:54:19.694275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:37.264 [2024-11-28 12:54:19.706262] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:37.264 [2024-11-28 12:54:19.706272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:37.264 [2024-11-28 12:54:19.718261] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:37.264 [2024-11-28 12:54:19.718271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:37.264 [2024-11-28 12:54:19.730260] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:37.264 [2024-11-28 12:54:19.730270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:37.264 [2024-11-28 12:54:19.733605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.264 [2024-11-28 12:54:19.742259] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:37.264 [2024-11-28 12:54:19.742271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:37.264 [2024-11-28 12:54:19.754261] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:30:37.264 [2024-11-28 12:54:19.754274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:37.264 [2024-11-28 12:54:19.766282] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:37.264 [2024-11-28 12:54:19.766295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:37.264 [2024-11-28 12:54:19.776147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.264 [2024-11-28 12:54:19.778261] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:37.264 [2024-11-28 12:54:19.778272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:37.523 [2024-11-28 12:54:19.790271] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:37.523 [2024-11-28 12:54:19.790288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:37.523 [2024-11-28 12:54:19.802283] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:37.523 [2024-11-28 12:54:19.802307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:37.523 [2024-11-28 12:54:19.814266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:37.523 [2024-11-28 12:54:19.814279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:37.523 [2024-11-28 12:54:19.826262] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:37.523 [2024-11-28 12:54:19.826275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:37.523 [2024-11-28 12:54:19.838264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:37.523 [2024-11-28 12:54:19.838275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:37.523 [2024-11-28 12:54:19.850263] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:37.523 [2024-11-28 12:54:19.850274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same error pair from subsystem.c:2126 (Requested NSID 1 already in use) and nvmf_rpc.c:1520 (Unable to add namespace) repeats at ~12-15 ms intervals, timestamps 12:54:19.862 through 12:54:22.142]
00:30:37.782 Running I/O for 5 seconds...
00:30:38.560 16209.00 IOPS, 126.63 MiB/s [2024-11-28T11:54:21.079Z]
00:30:39.594 16328.00 IOPS, 127.56 MiB/s [2024-11-28T11:54:22.113Z]
00:30:39.852 [2024-11-28 12:54:22.155250] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:39.852
[2024-11-28 12:54:22.155269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.852 [2024-11-28 12:54:22.170393] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.852 [2024-11-28 12:54:22.170413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.852 [2024-11-28 12:54:22.184114] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.852 [2024-11-28 12:54:22.184131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.852 [2024-11-28 12:54:22.199212] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.852 [2024-11-28 12:54:22.199229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.852 [2024-11-28 12:54:22.214840] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.852 [2024-11-28 12:54:22.214858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.852 [2024-11-28 12:54:22.230655] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.852 [2024-11-28 12:54:22.230673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.852 [2024-11-28 12:54:22.246115] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.852 [2024-11-28 12:54:22.246134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.852 [2024-11-28 12:54:22.259837] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.852 [2024-11-28 12:54:22.259857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.852 [2024-11-28 12:54:22.274958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.852 [2024-11-28 12:54:22.274976] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.852 [2024-11-28 12:54:22.289893] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.852 [2024-11-28 12:54:22.289911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.852 [2024-11-28 12:54:22.304580] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.852 [2024-11-28 12:54:22.304597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.852 [2024-11-28 12:54:22.319541] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.852 [2024-11-28 12:54:22.319565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.852 [2024-11-28 12:54:22.334461] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.852 [2024-11-28 12:54:22.334480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.852 [2024-11-28 12:54:22.346776] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.852 [2024-11-28 12:54:22.346796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:39.852 [2024-11-28 12:54:22.362057] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:39.852 [2024-11-28 12:54:22.362077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.111 [2024-11-28 12:54:22.376328] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.111 [2024-11-28 12:54:22.376349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.111 [2024-11-28 12:54:22.391718] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.111 [2024-11-28 12:54:22.391737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:40.111 [2024-11-28 12:54:22.407584] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.111 [2024-11-28 12:54:22.407603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.111 [2024-11-28 12:54:22.422454] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.111 [2024-11-28 12:54:22.422473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.111 [2024-11-28 12:54:22.434959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.111 [2024-11-28 12:54:22.434977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.111 [2024-11-28 12:54:22.450614] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.111 [2024-11-28 12:54:22.450632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.111 [2024-11-28 12:54:22.466053] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.111 [2024-11-28 12:54:22.466072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.111 [2024-11-28 12:54:22.480573] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.111 [2024-11-28 12:54:22.480593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.111 [2024-11-28 12:54:22.495191] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.111 [2024-11-28 12:54:22.495209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.111 [2024-11-28 12:54:22.510163] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.111 [2024-11-28 12:54:22.510182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.111 [2024-11-28 12:54:22.521521] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.111 [2024-11-28 12:54:22.521540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.111 [2024-11-28 12:54:22.536187] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.111 [2024-11-28 12:54:22.536206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.111 [2024-11-28 12:54:22.551217] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.111 [2024-11-28 12:54:22.551236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.111 [2024-11-28 12:54:22.566294] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.111 [2024-11-28 12:54:22.566314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.111 [2024-11-28 12:54:22.580172] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.111 [2024-11-28 12:54:22.580191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.111 [2024-11-28 12:54:22.594945] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.111 [2024-11-28 12:54:22.594969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.111 [2024-11-28 12:54:22.611107] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.111 [2024-11-28 12:54:22.611126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.111 [2024-11-28 12:54:22.626650] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.111 [2024-11-28 12:54:22.626670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.369 [2024-11-28 12:54:22.639123] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:40.369 [2024-11-28 12:54:22.639142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.369 [2024-11-28 12:54:22.654408] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.369 [2024-11-28 12:54:22.654428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.369 [2024-11-28 12:54:22.665178] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.369 [2024-11-28 12:54:22.665196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.369 [2024-11-28 12:54:22.680294] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.369 [2024-11-28 12:54:22.680313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.369 [2024-11-28 12:54:22.695696] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.369 [2024-11-28 12:54:22.695715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.369 [2024-11-28 12:54:22.710786] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.369 [2024-11-28 12:54:22.710804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.369 [2024-11-28 12:54:22.726530] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.369 [2024-11-28 12:54:22.726550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.369 [2024-11-28 12:54:22.738942] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.369 [2024-11-28 12:54:22.738967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.369 [2024-11-28 12:54:22.754633] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.369 
[2024-11-28 12:54:22.754651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.369 [2024-11-28 12:54:22.767240] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.369 [2024-11-28 12:54:22.767259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.369 [2024-11-28 12:54:22.782676] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.369 [2024-11-28 12:54:22.782695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.369 [2024-11-28 12:54:22.797851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.369 [2024-11-28 12:54:22.797870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.369 [2024-11-28 12:54:22.809260] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.369 [2024-11-28 12:54:22.809279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.369 [2024-11-28 12:54:22.824250] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.369 [2024-11-28 12:54:22.824269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.369 [2024-11-28 12:54:22.839105] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.369 [2024-11-28 12:54:22.839123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.369 [2024-11-28 12:54:22.854337] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.369 [2024-11-28 12:54:22.854355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.369 [2024-11-28 12:54:22.865896] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.369 [2024-11-28 12:54:22.865914] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.369 [2024-11-28 12:54:22.880326] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.369 [2024-11-28 12:54:22.880344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.627 [2024-11-28 12:54:22.896000] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.627 [2024-11-28 12:54:22.896019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.627 [2024-11-28 12:54:22.910875] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.627 [2024-11-28 12:54:22.910893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.627 [2024-11-28 12:54:22.926035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.627 [2024-11-28 12:54:22.926053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.627 [2024-11-28 12:54:22.939711] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.627 [2024-11-28 12:54:22.939729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.627 [2024-11-28 12:54:22.955231] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.627 [2024-11-28 12:54:22.955249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.627 [2024-11-28 12:54:22.970496] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.627 [2024-11-28 12:54:22.970514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.627 [2024-11-28 12:54:22.981421] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.627 [2024-11-28 12:54:22.981439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:40.627 [2024-11-28 12:54:22.996635] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.627 [2024-11-28 12:54:22.996653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.627 [2024-11-28 12:54:23.011656] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.627 [2024-11-28 12:54:23.011674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.627 [2024-11-28 12:54:23.027027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.627 [2024-11-28 12:54:23.027045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.627 [2024-11-28 12:54:23.041975] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.627 [2024-11-28 12:54:23.041993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.627 [2024-11-28 12:54:23.053633] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.627 [2024-11-28 12:54:23.053652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.627 16343.33 IOPS, 127.68 MiB/s [2024-11-28T11:54:23.146Z] [2024-11-28 12:54:23.068023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.627 [2024-11-28 12:54:23.068040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.627 [2024-11-28 12:54:23.083145] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.627 [2024-11-28 12:54:23.083163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.627 [2024-11-28 12:54:23.098458] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.627 [2024-11-28 12:54:23.098476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:40.627 [2024-11-28 12:54:23.109553] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.627 [2024-11-28 12:54:23.109571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.627 [2024-11-28 12:54:23.123921] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.627 [2024-11-28 12:54:23.123939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.627 [2024-11-28 12:54:23.138813] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.627 [2024-11-28 12:54:23.138831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.886 [2024-11-28 12:54:23.154782] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.886 [2024-11-28 12:54:23.154800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.886 [2024-11-28 12:54:23.170167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.886 [2024-11-28 12:54:23.170186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.886 [2024-11-28 12:54:23.184401] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.886 [2024-11-28 12:54:23.184419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.886 [2024-11-28 12:54:23.199223] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.886 [2024-11-28 12:54:23.199241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.886 [2024-11-28 12:54:23.214111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.886 [2024-11-28 12:54:23.214130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.886 [2024-11-28 12:54:23.227278] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.886 [2024-11-28 12:54:23.227296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.886 [2024-11-28 12:54:23.242527] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.886 [2024-11-28 12:54:23.242545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.886 [2024-11-28 12:54:23.254926] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.886 [2024-11-28 12:54:23.254946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.886 [2024-11-28 12:54:23.266516] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.886 [2024-11-28 12:54:23.266535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.886 [2024-11-28 12:54:23.280074] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.886 [2024-11-28 12:54:23.280094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.886 [2024-11-28 12:54:23.295115] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.886 [2024-11-28 12:54:23.295133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.886 [2024-11-28 12:54:23.309802] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.886 [2024-11-28 12:54:23.309821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.886 [2024-11-28 12:54:23.324137] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.886 [2024-11-28 12:54:23.324156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.886 [2024-11-28 12:54:23.338856] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:40.886 [2024-11-28 12:54:23.338874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.886 [2024-11-28 12:54:23.353384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.886 [2024-11-28 12:54:23.353402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.886 [2024-11-28 12:54:23.367254] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.886 [2024-11-28 12:54:23.367272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.886 [2024-11-28 12:54:23.382165] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.886 [2024-11-28 12:54:23.382183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.886 [2024-11-28 12:54:23.393456] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:40.886 [2024-11-28 12:54:23.393480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.144 [2024-11-28 12:54:23.408243] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.144 [2024-11-28 12:54:23.408262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.144 [2024-11-28 12:54:23.423089] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.144 [2024-11-28 12:54:23.423107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.144 [2024-11-28 12:54:23.437825] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.144 [2024-11-28 12:54:23.437843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.144 [2024-11-28 12:54:23.451723] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.144 
[2024-11-28 12:54:23.451742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.144 [2024-11-28 12:54:23.466601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.144 [2024-11-28 12:54:23.466619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.144 [2024-11-28 12:54:23.479645] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.144 [2024-11-28 12:54:23.479664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.144 [2024-11-28 12:54:23.494965] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.144 [2024-11-28 12:54:23.494983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.144 [2024-11-28 12:54:23.510707] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.144 [2024-11-28 12:54:23.510725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.144 [2024-11-28 12:54:23.523014] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.144 [2024-11-28 12:54:23.523032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.144 [2024-11-28 12:54:23.537807] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.144 [2024-11-28 12:54:23.537826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.144 [2024-11-28 12:54:23.552005] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.144 [2024-11-28 12:54:23.552023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.144 [2024-11-28 12:54:23.567006] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.144 [2024-11-28 12:54:23.567024] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.144 [2024-11-28 12:54:23.582259] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.144 [2024-11-28 12:54:23.582277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.144 [2024-11-28 12:54:23.593150] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.144 [2024-11-28 12:54:23.593169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.144 [2024-11-28 12:54:23.608569] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.144 [2024-11-28 12:54:23.608587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.144 [2024-11-28 12:54:23.623253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.144 [2024-11-28 12:54:23.623272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.144 [2024-11-28 12:54:23.638884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.144 [2024-11-28 12:54:23.638902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.144 [2024-11-28 12:54:23.654925] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.144 [2024-11-28 12:54:23.654943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.433 [2024-11-28 12:54:23.670182] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.433 [2024-11-28 12:54:23.670207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.433 [2024-11-28 12:54:23.684294] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.433 [2024-11-28 12:54:23.684312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:41.433 [2024-11-28 12:54:23.699554] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.433 [2024-11-28 12:54:23.699572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.433 [2024-11-28 12:54:23.714604] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.433 [2024-11-28 12:54:23.714622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.433 [2024-11-28 12:54:23.729494] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.433 [2024-11-28 12:54:23.729512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.433 [2024-11-28 12:54:23.743408] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.433 [2024-11-28 12:54:23.743426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.433 [2024-11-28 12:54:23.758726] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.433 [2024-11-28 12:54:23.758744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.433 [2024-11-28 12:54:23.774187] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.433 [2024-11-28 12:54:23.774206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.433 [2024-11-28 12:54:23.787757] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.433 [2024-11-28 12:54:23.787776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.433 [2024-11-28 12:54:23.802983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.433 [2024-11-28 12:54:23.803002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:41.433 [2024-11-28 12:54:23.817989] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:41.433 [2024-11-28 12:54:23.818009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use" / nvmf_rpc_ns_paused "Unable to add namespace") repeats every ~12-15 ms from 12:54:23.830 through 12:54:25.069 while the workload runs; repeats elided ...]
16373.75 IOPS, 127.92 MiB/s [2024-11-28T11:54:24.211Z]
16394.60 IOPS, 128.08 MiB/s [2024-11-28T11:54:25.243Z]
00:30:42.724 Latency(us) [2024-11-28T11:54:25.243Z]
00:30:42.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:42.724 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:30:42.724 Nvme1n1 : 5.01 16395.02 128.09 0.00 0.00 7799.09 2080.06 13335.15
00:30:42.724 [2024-11-28T11:54:25.243Z]
=================================================================================================================== 00:30:42.724 [2024-11-28T11:54:25.243Z] Total : 16395.02 128.09 0.00 0.00 7799.09 2080.06 13335.15
00:30:42.724 [2024-11-28 12:54:25.078267] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:42.724 [2024-11-28 12:54:25.078285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair repeats every ~12 ms through 12:54:25.234 as the test winds down; repeats elided ...]
00:30:42.985 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2739355) - No such process 00:30:42.985 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2739355 00:30:42.985 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy --
target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:42.985 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.985 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:42.985 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.985 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:42.985 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.985 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:42.985 delay0 00:30:42.985 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.985 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:30:42.985 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.985 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:42.985 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.985 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:30:42.985 [2024-11-28 12:54:25.368030] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery 
service or discovery service referral 00:30:49.538 [2024-11-28 12:54:31.709095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15dba00 is same with the state(6) to be set 00:30:49.538 Initializing NVMe Controllers 00:30:49.538 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:49.538 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:49.538 Initialization complete. Launching workers. 00:30:49.538 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 291, failed: 7765 00:30:49.538 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7982, failed to submit 74 00:30:49.538 success 7866, unsuccessful 116, failed 0 00:30:49.538 12:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:30:49.538 12:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:30:49.538 12:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:49.538 12:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:30:49.538 12:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:49.538 12:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:30:49.538 12:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:49.538 12:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:49.538 rmmod nvme_tcp 00:30:49.538 rmmod nvme_fabrics 00:30:49.538 rmmod nvme_keyring 00:30:49.538 12:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:49.538 12:54:31 
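The abort-example tallies above can be cross-checked for internal consistency: every abort the tool attempted is either submitted or failed-to-submit, and every submitted abort lands in exactly one of success / unsuccessful / failed. The pairing of the counters below is my reading of the output, not something the log itself states:

```shell
# Cross-check the abort example's reported counters from the log above.
io_completed=291   io_failed=7765    # NS line:    "I/O completed: 291, failed: 7765"
submitted=7982     not_submitted=74  # CTRLR line: "abort submitted 7982, failed to submit 74"
success=7866 unsuccessful=116 failed=0

# Aborts attempted (submitted + not submitted) match total I/Os
# (completed + failed), i.e. one abort issued per submitted I/O.
(( io_completed + io_failed == submitted + not_submitted )) && echo "attempts consistent"
# Every submitted abort is accounted for by an outcome bucket.
(( success + unsuccessful + failed == submitted )) && echo "outcomes consistent"
```

Both checks hold for this run, which is why the test passes despite the large "failed" I/O count: those are I/Os the aborts successfully cancelled, not errors.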
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:30:49.538 12:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:30:49.538 12:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2737426 ']' 00:30:49.538 12:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2737426 00:30:49.538 12:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2737426 ']' 00:30:49.538 12:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2737426 00:30:49.538 12:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:30:49.538 12:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:49.538 12:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2737426 00:30:49.538 12:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:49.538 12:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:49.538 12:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2737426' 00:30:49.538 killing process with pid 2737426 00:30:49.538 12:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2737426 00:30:49.538 12:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2737426 00:30:49.538 12:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:49.538 12:54:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:49.538 12:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:49.538 12:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:30:49.538 12:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:30:49.538 12:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:49.538 12:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:30:49.538 12:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:49.538 12:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:49.538 12:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.538 12:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:49.538 12:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:52.070 00:30:52.070 real 0m31.185s 00:30:52.070 user 0m40.886s 00:30:52.070 sys 0m12.052s 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:52.070 ************************************ 00:30:52.070 END TEST nvmf_zcopy 00:30:52.070 ************************************ 00:30:52.070 12:54:34 
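The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs earlier in this test is expected: zcopy.sh keeps re-issuing nvmf_subsystem_add_ns for NSID 1 while that NSID is still attached, and the add only succeeds after nvmf_subsystem_remove_ns frees it. A minimal bash sketch of that duplicate-NSID check; the function name and the NSID set are made up for illustration, the real check lives in subsystem.c (spdk_nvmf_subsystem_add_ns_ext):

```shell
# Hypothetical stand-in for the duplicate-NSID check that produces the
# "Requested NSID 1 already in use" errors in the log above.
in_use="1 2 3"          # NSIDs currently attached to the subsystem (made-up values)

add_ns() {
    local nsid=$1 n
    for n in $in_use; do
        if [ "$n" = "$nsid" ]; then
            # Mirrors the subsystem.c error; caller then logs "Unable to add namespace".
            echo "Requested NSID $nsid already in use" >&2
            return 1
        fi
    done
    in_use="$in_use $nsid"
    echo "NSID $nsid added"
}

add_ns 1 || echo "Unable to add namespace"   # collides, as in the log
add_ns 4                                     # a free NSID succeeds
```

Once the namespace is removed (as the nvmf_subsystem_remove_ns RPC does above), the same NSID can be added again, which is exactly the add/remove cycle this test exercises.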
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:52.070 ************************************ 00:30:52.070 START TEST nvmf_nmic 00:30:52.070 ************************************ 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:52.070 * Looking for test storage... 00:30:52.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@336 -- # IFS=.-: 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:52.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.070 --rc genhtml_branch_coverage=1 00:30:52.070 --rc 
genhtml_function_coverage=1 00:30:52.070 --rc genhtml_legend=1 00:30:52.070 --rc geninfo_all_blocks=1 00:30:52.070 --rc geninfo_unexecuted_blocks=1 00:30:52.070 00:30:52.070 ' 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:52.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.070 --rc genhtml_branch_coverage=1 00:30:52.070 --rc genhtml_function_coverage=1 00:30:52.070 --rc genhtml_legend=1 00:30:52.070 --rc geninfo_all_blocks=1 00:30:52.070 --rc geninfo_unexecuted_blocks=1 00:30:52.070 00:30:52.070 ' 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:52.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.070 --rc genhtml_branch_coverage=1 00:30:52.070 --rc genhtml_function_coverage=1 00:30:52.070 --rc genhtml_legend=1 00:30:52.070 --rc geninfo_all_blocks=1 00:30:52.070 --rc geninfo_unexecuted_blocks=1 00:30:52.070 00:30:52.070 ' 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:52.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.070 --rc genhtml_branch_coverage=1 00:30:52.070 --rc genhtml_function_coverage=1 00:30:52.070 --rc genhtml_legend=1 00:30:52.070 --rc geninfo_all_blocks=1 00:30:52.070 --rc geninfo_unexecuted_blocks=1 00:30:52.070 00:30:52.070 ' 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:52.070 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.071 12:54:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:52.071 12:54:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.071 12:54:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:30:52.071 12:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@320 -- # local -ga e810 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:57.340 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:57.340 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:57.340 Found net devices under 0000:86:00.0: cvl_0_0 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:57.340 Found net devices under 0000:86:00.1: cvl_0_1 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:57.340 12:54:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:57.340 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:57.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:57.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:30:57.341 00:30:57.341 --- 10.0.0.2 ping statistics --- 00:30:57.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.341 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:57.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:57.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:30:57.341 00:30:57.341 --- 10.0.0.1 ping statistics --- 00:30:57.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.341 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:30:57.341 12:54:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2744710 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2744710 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2744710 ']' 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:57.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:57.341 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:57.599 [2024-11-28 12:54:39.859754] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:30:57.599 [2024-11-28 12:54:39.860685] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:30:57.599 [2024-11-28 12:54:39.860722] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:57.599 [2024-11-28 12:54:39.927644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:57.600 [2024-11-28 12:54:39.969930] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:57.600 [2024-11-28 12:54:39.969975] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:57.600 [2024-11-28 12:54:39.969983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:57.600 [2024-11-28 12:54:39.969989] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:57.600 [2024-11-28 12:54:39.969994] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:57.600 [2024-11-28 12:54:39.971432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:57.600 [2024-11-28 12:54:39.971448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:57.600 [2024-11-28 12:54:39.971541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:57.600 [2024-11-28 12:54:39.971544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.600 [2024-11-28 12:54:40.041709] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:57.600 [2024-11-28 12:54:40.041797] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:30:57.600 [2024-11-28 12:54:40.042084] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:57.600 [2024-11-28 12:54:40.042296] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:57.600 [2024-11-28 12:54:40.042476] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:57.600 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:57.600 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:30:57.600 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:57.600 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:57.600 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:57.600 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:57.600 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:57.600 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.600 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:57.600 [2024-11-28 12:54:40.108248] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:57.859 Malloc0 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:57.859 [2024-11-28 
12:54:40.176166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:30:57.859 test case1: single bdev can't be used in multiple subsystems 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.859 12:54:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:57.859 [2024-11-28 12:54:40.207953] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:30:57.859 [2024-11-28 12:54:40.207973] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:30:57.859 [2024-11-28 12:54:40.207980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.859 request: 00:30:57.859 { 00:30:57.859 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:30:57.859 "namespace": { 00:30:57.859 "bdev_name": "Malloc0", 00:30:57.859 "no_auto_visible": false, 00:30:57.859 "hide_metadata": false 00:30:57.859 }, 00:30:57.859 "method": "nvmf_subsystem_add_ns", 00:30:57.859 "req_id": 1 00:30:57.859 } 00:30:57.859 Got JSON-RPC error response 00:30:57.859 response: 00:30:57.859 { 00:30:57.859 "code": -32602, 00:30:57.859 "message": "Invalid parameters" 00:30:57.859 } 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:30:57.859 Adding namespace failed - expected result. 
00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:30:57.859 test case2: host connect to nvmf target in multiple paths 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:57.859 [2024-11-28 12:54:40.220046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.859 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:58.118 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:30:58.377 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:30:58.377 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:30:58.377 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:58.377 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:30:58.377 12:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:00.280 12:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:00.280 12:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:00.280 12:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:00.280 12:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:00.280 12:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:00.280 12:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:00.280 12:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:00.280 [global] 00:31:00.280 thread=1 00:31:00.280 invalidate=1 00:31:00.280 rw=write 00:31:00.280 time_based=1 00:31:00.280 runtime=1 00:31:00.280 ioengine=libaio 00:31:00.280 direct=1 00:31:00.280 bs=4096 00:31:00.280 iodepth=1 00:31:00.280 norandommap=0 00:31:00.280 numjobs=1 00:31:00.280 00:31:00.280 verify_dump=1 00:31:00.280 verify_backlog=512 00:31:00.280 verify_state_save=0 00:31:00.280 do_verify=1 00:31:00.280 verify=crc32c-intel 00:31:00.280 [job0] 00:31:00.280 filename=/dev/nvme0n1 00:31:00.280 Could not set queue depth (nvme0n1) 00:31:00.538 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:00.538 fio-3.35 00:31:00.538 Starting 1 thread 00:31:01.917 00:31:01.917 job0: (groupid=0, jobs=1): err= 0: pid=2745471: Thu Nov 28 
12:54:44 2024 00:31:01.917 read: IOPS=22, BW=89.9KiB/s (92.1kB/s)(92.0KiB/1023msec) 00:31:01.917 slat (nsec): min=9354, max=23277, avg=22155.91, stdev=2802.60 00:31:01.917 clat (usec): min=40871, max=41966, avg=41014.44, stdev=213.84 00:31:01.917 lat (usec): min=40880, max=41989, avg=41036.60, stdev=214.22 00:31:01.917 clat percentiles (usec): 00:31:01.917 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:01.917 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:01.917 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:01.917 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:01.917 | 99.99th=[42206] 00:31:01.917 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:31:01.917 slat (nsec): min=9016, max=39722, avg=10163.57, stdev=1489.49 00:31:01.917 clat (usec): min=130, max=372, avg=142.52, stdev=20.02 00:31:01.917 lat (usec): min=140, max=411, avg=152.68, stdev=20.77 00:31:01.917 clat percentiles (usec): 00:31:01.917 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 137], 00:31:01.917 | 30.00th=[ 139], 40.00th=[ 139], 50.00th=[ 139], 60.00th=[ 141], 00:31:01.917 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 145], 95.00th=[ 151], 00:31:01.917 | 99.00th=[ 245], 99.50th=[ 247], 99.90th=[ 371], 99.95th=[ 371], 00:31:01.917 | 99.99th=[ 371] 00:31:01.917 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:31:01.917 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:01.917 lat (usec) : 250=95.33%, 500=0.37% 00:31:01.917 lat (msec) : 50=4.30% 00:31:01.917 cpu : usr=0.39%, sys=0.39%, ctx=535, majf=0, minf=1 00:31:01.917 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:01.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.917 issued rwts: 
total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.917 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:01.917 00:31:01.917 Run status group 0 (all jobs): 00:31:01.917 READ: bw=89.9KiB/s (92.1kB/s), 89.9KiB/s-89.9KiB/s (92.1kB/s-92.1kB/s), io=92.0KiB (94.2kB), run=1023-1023msec 00:31:01.917 WRITE: bw=2002KiB/s (2050kB/s), 2002KiB/s-2002KiB/s (2050kB/s-2050kB/s), io=2048KiB (2097kB), run=1023-1023msec 00:31:01.917 00:31:01.917 Disk stats (read/write): 00:31:01.917 nvme0n1: ios=69/512, merge=0/0, ticks=1005/71, in_queue=1076, util=95.19% 00:31:01.917 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:01.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:01.917 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:01.917 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:01.917 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:01.917 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:01.917 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:01.917 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:01.917 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:01.917 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:01.917 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:01.917 12:54:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:01.917 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:01.917 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:01.917 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:01.917 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:01.917 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:01.917 rmmod nvme_tcp 00:31:01.917 rmmod nvme_fabrics 00:31:01.917 rmmod nvme_keyring 00:31:02.176 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:02.176 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:02.176 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:02.176 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2744710 ']' 00:31:02.177 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2744710 00:31:02.177 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2744710 ']' 00:31:02.177 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2744710 00:31:02.177 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:02.177 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:02.177 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2744710 
00:31:02.177 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:02.177 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:02.177 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2744710' 00:31:02.177 killing process with pid 2744710 00:31:02.177 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2744710 00:31:02.177 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2744710 00:31:02.177 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:02.177 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:02.177 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:02.177 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:02.436 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:02.436 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:02.436 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:02.436 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:02.436 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:02.436 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.436 12:54:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:02.436 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.341 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:04.341 00:31:04.341 real 0m12.626s 00:31:04.341 user 0m24.123s 00:31:04.341 sys 0m5.736s 00:31:04.341 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:04.341 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:04.341 ************************************ 00:31:04.341 END TEST nvmf_nmic 00:31:04.342 ************************************ 00:31:04.342 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:04.342 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:04.342 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:04.342 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:04.342 ************************************ 00:31:04.342 START TEST nvmf_fio_target 00:31:04.342 ************************************ 00:31:04.342 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:04.602 * Looking for test storage... 
00:31:04.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:04.602 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:04.602 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:31:04.602 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:04.602 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:04.603 
12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:04.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.603 --rc genhtml_branch_coverage=1 00:31:04.603 --rc genhtml_function_coverage=1 00:31:04.603 --rc genhtml_legend=1 00:31:04.603 --rc geninfo_all_blocks=1 00:31:04.603 --rc geninfo_unexecuted_blocks=1 00:31:04.603 00:31:04.603 ' 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:04.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.603 --rc genhtml_branch_coverage=1 00:31:04.603 --rc genhtml_function_coverage=1 00:31:04.603 --rc genhtml_legend=1 00:31:04.603 --rc geninfo_all_blocks=1 00:31:04.603 --rc geninfo_unexecuted_blocks=1 00:31:04.603 00:31:04.603 ' 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:04.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.603 --rc genhtml_branch_coverage=1 00:31:04.603 --rc genhtml_function_coverage=1 00:31:04.603 --rc genhtml_legend=1 00:31:04.603 --rc geninfo_all_blocks=1 00:31:04.603 --rc geninfo_unexecuted_blocks=1 00:31:04.603 00:31:04.603 ' 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:04.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.603 --rc genhtml_branch_coverage=1 00:31:04.603 --rc genhtml_function_coverage=1 00:31:04.603 --rc genhtml_legend=1 00:31:04.603 --rc geninfo_all_blocks=1 
00:31:04.603 --rc geninfo_unexecuted_blocks=1 00:31:04.603 00:31:04.603 ' 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:04.603 
12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.603 12:54:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:04.603 
12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:04.603 12:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:04.603 12:54:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:10.000 12:54:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:10.000 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:10.000 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:10.000 
12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:10.000 Found net 
devices under 0000:86:00.0: cvl_0_0 00:31:10.000 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:10.001 Found net devices under 0000:86:00.1: cvl_0_1 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:10.001 12:54:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:10.001 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:10.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:10.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:31:10.284 00:31:10.284 --- 10.0.0.2 ping statistics --- 00:31:10.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.284 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:10.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:10.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:31:10.284 00:31:10.284 --- 10.0.0.1 ping statistics --- 00:31:10.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.284 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:10.284 12:54:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2749078 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2749078 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2749078 ']' 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:10.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:10.284 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:10.284 [2024-11-28 12:54:52.683429] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:10.284 [2024-11-28 12:54:52.684397] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:31:10.284 [2024-11-28 12:54:52.684432] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:10.284 [2024-11-28 12:54:52.755167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:10.553 [2024-11-28 12:54:52.797804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:10.553 [2024-11-28 12:54:52.797840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:10.553 [2024-11-28 12:54:52.797848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:10.553 [2024-11-28 12:54:52.797854] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:10.553 [2024-11-28 12:54:52.797859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:10.553 [2024-11-28 12:54:52.799308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:10.553 [2024-11-28 12:54:52.799405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:10.553 [2024-11-28 12:54:52.799431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:10.553 [2024-11-28 12:54:52.799433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.553 [2024-11-28 12:54:52.868277] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:10.553 [2024-11-28 12:54:52.868404] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:10.553 [2024-11-28 12:54:52.868611] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:10.553 [2024-11-28 12:54:52.868876] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:10.553 [2024-11-28 12:54:52.869066] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:10.553 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:10.553 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:10.553 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:10.553 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:10.553 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:10.553 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:10.553 12:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:10.812 [2024-11-28 12:54:53.104190] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:10.812 12:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:11.071 12:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:11.071 12:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:31:11.071 12:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:11.071 12:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:11.330 12:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:11.330 12:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:11.590 12:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:11.590 12:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:11.850 12:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:12.110 12:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:12.110 12:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:12.110 12:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:12.110 12:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:12.369 12:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:31:12.369 12:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:12.629 12:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:12.887 12:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:12.887 12:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:12.887 12:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:12.888 12:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:13.146 12:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:13.404 [2024-11-28 12:54:55.736114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:13.404 12:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:13.663 12:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:13.663 12:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:14.231 12:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:14.231 12:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:31:14.231 12:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:14.231 12:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:31:14.231 12:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:31:14.231 12:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:31:16.136 12:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:16.136 12:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:16.136 12:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:16.136 12:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:31:16.136 12:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:16.136 12:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:31:16.136 12:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:16.136 [global] 00:31:16.136 thread=1 00:31:16.136 invalidate=1 00:31:16.136 rw=write 00:31:16.136 time_based=1 00:31:16.136 runtime=1 00:31:16.136 ioengine=libaio 00:31:16.136 direct=1 00:31:16.136 bs=4096 00:31:16.136 iodepth=1 00:31:16.136 norandommap=0 00:31:16.136 numjobs=1 00:31:16.136 00:31:16.136 verify_dump=1 00:31:16.136 verify_backlog=512 00:31:16.136 verify_state_save=0 00:31:16.136 do_verify=1 00:31:16.136 verify=crc32c-intel 00:31:16.136 [job0] 00:31:16.136 filename=/dev/nvme0n1 00:31:16.136 [job1] 00:31:16.136 filename=/dev/nvme0n2 00:31:16.136 [job2] 00:31:16.136 filename=/dev/nvme0n3 00:31:16.136 [job3] 00:31:16.136 filename=/dev/nvme0n4 00:31:16.136 Could not set queue depth (nvme0n1) 00:31:16.136 Could not set queue depth (nvme0n2) 00:31:16.136 Could not set queue depth (nvme0n3) 00:31:16.136 Could not set queue depth (nvme0n4) 00:31:16.395 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:16.395 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:16.395 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:16.395 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:16.395 fio-3.35 00:31:16.395 Starting 4 threads 00:31:17.772 00:31:17.772 job0: (groupid=0, jobs=1): err= 0: pid=2750348: Thu Nov 28 12:55:00 2024 00:31:17.772 read: IOPS=2015, BW=8063KiB/s (8256kB/s)(8256KiB/1024msec) 00:31:17.772 slat (nsec): min=6834, max=38300, avg=8259.37, stdev=1357.10 00:31:17.772 clat (usec): min=182, max=38549, avg=250.20, stdev=843.70 00:31:17.772 lat (usec): min=189, 
max=38563, avg=258.46, stdev=843.84 00:31:17.772 clat percentiles (usec): 00:31:17.772 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 221], 00:31:17.772 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 227], 60.00th=[ 231], 00:31:17.772 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 273], 00:31:17.772 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 545], 99.95th=[ 578], 00:31:17.772 | 99.99th=[38536] 00:31:17.772 write: IOPS=2500, BW=9.77MiB/s (10.2MB/s)(10.0MiB/1024msec); 0 zone resets 00:31:17.772 slat (nsec): min=9920, max=80941, avg=11697.22, stdev=3437.96 00:31:17.772 clat (usec): min=135, max=569, avg=174.45, stdev=25.18 00:31:17.772 lat (usec): min=145, max=579, avg=186.15, stdev=26.20 00:31:17.772 clat percentiles (usec): 00:31:17.772 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:31:17.772 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:31:17.772 | 70.00th=[ 180], 80.00th=[ 188], 90.00th=[ 204], 95.00th=[ 227], 00:31:17.772 | 99.00th=[ 247], 99.50th=[ 253], 99.90th=[ 289], 99.95th=[ 562], 00:31:17.772 | 99.99th=[ 570] 00:31:17.772 bw ( KiB/s): min=10080, max=10400, per=42.67%, avg=10240.00, stdev=226.27, samples=2 00:31:17.772 iops : min= 2520, max= 2600, avg=2560.00, stdev=56.57, samples=2 00:31:17.772 lat (usec) : 250=94.77%, 500=5.13%, 750=0.09% 00:31:17.772 lat (msec) : 50=0.02% 00:31:17.772 cpu : usr=3.81%, sys=7.23%, ctx=4626, majf=0, minf=1 00:31:17.772 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:17.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.772 issued rwts: total=2064,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:17.772 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:17.772 job1: (groupid=0, jobs=1): err= 0: pid=2750349: Thu Nov 28 12:55:00 2024 00:31:17.772 read: IOPS=522, BW=2090KiB/s (2140kB/s)(2092KiB/1001msec) 
00:31:17.772 slat (nsec): min=6684, max=27491, avg=7997.37, stdev=2923.75 00:31:17.772 clat (usec): min=185, max=41990, avg=1503.86, stdev=7045.53 00:31:17.772 lat (usec): min=192, max=42013, avg=1511.86, stdev=7048.09 00:31:17.772 clat percentiles (usec): 00:31:17.772 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 223], 00:31:17.772 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 247], 60.00th=[ 258], 00:31:17.772 | 70.00th=[ 269], 80.00th=[ 297], 90.00th=[ 318], 95.00th=[ 330], 00:31:17.772 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:31:17.772 | 99.99th=[42206] 00:31:17.772 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:31:17.772 slat (nsec): min=9573, max=57182, avg=11232.39, stdev=2859.73 00:31:17.772 clat (usec): min=138, max=1773, avg=190.03, stdev=66.29 00:31:17.772 lat (usec): min=149, max=1830, avg=201.26, stdev=67.56 00:31:17.772 clat percentiles (usec): 00:31:17.772 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 167], 00:31:17.772 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 194], 00:31:17.772 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 221], 95.00th=[ 227], 00:31:17.772 | 99.00th=[ 239], 99.50th=[ 289], 99.90th=[ 1352], 99.95th=[ 1778], 00:31:17.772 | 99.99th=[ 1778] 00:31:17.772 bw ( KiB/s): min= 4096, max= 4096, per=17.07%, avg=4096.00, stdev= 0.00, samples=1 00:31:17.772 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:17.772 lat (usec) : 250=83.26%, 500=15.58% 00:31:17.772 lat (msec) : 2=0.13%, 50=1.03% 00:31:17.772 cpu : usr=0.90%, sys=1.50%, ctx=1548, majf=0, minf=1 00:31:17.772 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:17.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.772 issued rwts: total=523,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:17.772 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:31:17.772 job2: (groupid=0, jobs=1): err= 0: pid=2750351: Thu Nov 28 12:55:00 2024 00:31:17.772 read: IOPS=1930, BW=7720KiB/s (7906kB/s)(7728KiB/1001msec) 00:31:17.772 slat (nsec): min=7310, max=48651, avg=8933.17, stdev=3056.88 00:31:17.772 clat (usec): min=212, max=2523, avg=276.83, stdev=79.23 00:31:17.772 lat (usec): min=229, max=2541, avg=285.76, stdev=79.54 00:31:17.772 clat percentiles (usec): 00:31:17.772 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 247], 00:31:17.772 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 262], 00:31:17.772 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 310], 95.00th=[ 465], 00:31:17.772 | 99.00th=[ 486], 99.50th=[ 502], 99.90th=[ 562], 99.95th=[ 2540], 00:31:17.772 | 99.99th=[ 2540] 00:31:17.772 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:31:17.772 slat (nsec): min=10749, max=71821, avg=12469.60, stdev=3001.13 00:31:17.772 clat (usec): min=141, max=323, avg=199.81, stdev=31.63 00:31:17.772 lat (usec): min=173, max=335, avg=212.28, stdev=31.69 00:31:17.772 clat percentiles (usec): 00:31:17.772 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 00:31:17.772 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:31:17.772 | 70.00th=[ 200], 80.00th=[ 210], 90.00th=[ 237], 95.00th=[ 293], 00:31:17.772 | 99.00th=[ 302], 99.50th=[ 306], 99.90th=[ 310], 99.95th=[ 314], 00:31:17.772 | 99.99th=[ 326] 00:31:17.772 bw ( KiB/s): min= 8192, max= 8192, per=34.13%, avg=8192.00, stdev= 0.00, samples=1 00:31:17.772 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:17.772 lat (usec) : 250=61.21%, 500=38.52%, 750=0.25% 00:31:17.772 lat (msec) : 4=0.03% 00:31:17.772 cpu : usr=5.20%, sys=4.80%, ctx=3984, majf=0, minf=1 00:31:17.772 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:17.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.772 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.772 issued rwts: total=1932,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:17.772 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:17.772 job3: (groupid=0, jobs=1): err= 0: pid=2750352: Thu Nov 28 12:55:00 2024 00:31:17.772 read: IOPS=46, BW=188KiB/s (192kB/s)(192KiB/1022msec) 00:31:17.772 slat (nsec): min=8726, max=46490, avg=14680.23, stdev=7742.28 00:31:17.772 clat (usec): min=224, max=41251, avg=18909.43, stdev=20464.39 00:31:17.772 lat (usec): min=259, max=41262, avg=18924.11, stdev=20465.67 00:31:17.772 clat percentiles (usec): 00:31:17.772 | 1.00th=[ 225], 5.00th=[ 258], 10.00th=[ 260], 20.00th=[ 265], 00:31:17.772 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 343], 60.00th=[40633], 00:31:17.772 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:17.772 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:17.772 | 99.99th=[41157] 00:31:17.772 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:31:17.772 slat (nsec): min=10605, max=53404, avg=13953.79, stdev=5775.50 00:31:17.772 clat (usec): min=154, max=663, avg=202.94, stdev=38.49 00:31:17.772 lat (usec): min=176, max=693, avg=216.90, stdev=39.61 00:31:17.772 clat percentiles (usec): 00:31:17.772 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 182], 00:31:17.772 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:31:17.772 | 70.00th=[ 206], 80.00th=[ 227], 90.00th=[ 239], 95.00th=[ 247], 00:31:17.772 | 99.00th=[ 289], 99.50th=[ 490], 99.90th=[ 660], 99.95th=[ 660], 00:31:17.772 | 99.99th=[ 660] 00:31:17.772 bw ( KiB/s): min= 4096, max= 4096, per=17.07%, avg=4096.00, stdev= 0.00, samples=1 00:31:17.772 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:17.772 lat (usec) : 250=87.32%, 500=8.39%, 750=0.36% 00:31:17.772 lat (msec) : 50=3.93% 00:31:17.772 cpu : usr=1.08%, sys=0.39%, ctx=560, majf=0, minf=1 
00:31:17.772 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:17.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.772 issued rwts: total=48,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:17.772 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:17.772 00:31:17.772 Run status group 0 (all jobs): 00:31:17.772 READ: bw=17.4MiB/s (18.3MB/s), 188KiB/s-8063KiB/s (192kB/s-8256kB/s), io=17.8MiB (18.7MB), run=1001-1024msec 00:31:17.772 WRITE: bw=23.4MiB/s (24.6MB/s), 2004KiB/s-9.77MiB/s (2052kB/s-10.2MB/s), io=24.0MiB (25.2MB), run=1001-1024msec 00:31:17.772 00:31:17.772 Disk stats (read/write): 00:31:17.772 nvme0n1: ios=1774/2048, merge=0/0, ticks=404/328, in_queue=732, util=81.86% 00:31:17.772 nvme0n2: ios=197/512, merge=0/0, ticks=969/103, in_queue=1072, util=97.43% 00:31:17.772 nvme0n3: ios=1559/1737, merge=0/0, ticks=1334/326, in_queue=1660, util=97.28% 00:31:17.772 nvme0n4: ios=42/512, merge=0/0, ticks=663/104, in_queue=767, util=89.15% 00:31:17.772 12:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:17.772 [global] 00:31:17.772 thread=1 00:31:17.772 invalidate=1 00:31:17.772 rw=randwrite 00:31:17.772 time_based=1 00:31:17.772 runtime=1 00:31:17.772 ioengine=libaio 00:31:17.772 direct=1 00:31:17.772 bs=4096 00:31:17.772 iodepth=1 00:31:17.773 norandommap=0 00:31:17.773 numjobs=1 00:31:17.773 00:31:17.773 verify_dump=1 00:31:17.773 verify_backlog=512 00:31:17.773 verify_state_save=0 00:31:17.773 do_verify=1 00:31:17.773 verify=crc32c-intel 00:31:17.773 [job0] 00:31:17.773 filename=/dev/nvme0n1 00:31:17.773 [job1] 00:31:17.773 filename=/dev/nvme0n2 00:31:17.773 [job2] 00:31:17.773 filename=/dev/nvme0n3 00:31:17.773 [job3] 00:31:17.773 
filename=/dev/nvme0n4 00:31:17.773 Could not set queue depth (nvme0n1) 00:31:17.773 Could not set queue depth (nvme0n2) 00:31:17.773 Could not set queue depth (nvme0n3) 00:31:17.773 Could not set queue depth (nvme0n4) 00:31:18.031 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:18.031 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:18.031 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:18.031 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:18.031 fio-3.35 00:31:18.031 Starting 4 threads 00:31:19.408 00:31:19.408 job0: (groupid=0, jobs=1): err= 0: pid=2750736: Thu Nov 28 12:55:01 2024 00:31:19.408 read: IOPS=2211, BW=8847KiB/s (9059kB/s)(8856KiB/1001msec) 00:31:19.408 slat (nsec): min=6326, max=30147, avg=7312.76, stdev=1029.83 00:31:19.408 clat (usec): min=195, max=530, avg=242.70, stdev=23.92 00:31:19.408 lat (usec): min=202, max=537, avg=250.02, stdev=23.96 00:31:19.408 clat percentiles (usec): 00:31:19.408 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 223], 20.00th=[ 231], 00:31:19.408 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 245], 00:31:19.408 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 265], 00:31:19.408 | 99.00th=[ 302], 99.50th=[ 437], 99.90th=[ 494], 99.95th=[ 502], 00:31:19.408 | 99.99th=[ 529] 00:31:19.408 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:31:19.408 slat (nsec): min=8958, max=40988, avg=10141.49, stdev=1661.53 00:31:19.408 clat (usec): min=133, max=508, avg=160.33, stdev=16.71 00:31:19.408 lat (usec): min=143, max=549, avg=170.47, stdev=17.47 00:31:19.408 clat percentiles (usec): 00:31:19.408 | 1.00th=[ 139], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:31:19.408 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 159], 
60.00th=[ 161], 00:31:19.408 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 178], 95.00th=[ 184], 00:31:19.408 | 99.00th=[ 196], 99.50th=[ 202], 99.90th=[ 404], 99.95th=[ 408], 00:31:19.408 | 99.99th=[ 510] 00:31:19.408 bw ( KiB/s): min=11160, max=11160, per=62.06%, avg=11160.00, stdev= 0.00, samples=1 00:31:19.408 iops : min= 2790, max= 2790, avg=2790.00, stdev= 0.00, samples=1 00:31:19.408 lat (usec) : 250=88.63%, 500=11.31%, 750=0.06% 00:31:19.408 cpu : usr=1.60%, sys=5.00%, ctx=4774, majf=0, minf=1 00:31:19.408 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:19.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.408 issued rwts: total=2214,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:19.409 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:19.409 job1: (groupid=0, jobs=1): err= 0: pid=2750737: Thu Nov 28 12:55:01 2024 00:31:19.409 read: IOPS=516, BW=2064KiB/s (2114kB/s)(2116KiB/1025msec) 00:31:19.409 slat (nsec): min=6610, max=35563, avg=7797.26, stdev=2572.37 00:31:19.409 clat (usec): min=216, max=41068, avg=1543.07, stdev=7187.58 00:31:19.409 lat (usec): min=223, max=41101, avg=1550.87, stdev=7189.73 00:31:19.409 clat percentiles (usec): 00:31:19.409 | 1.00th=[ 221], 5.00th=[ 225], 10.00th=[ 227], 20.00th=[ 229], 00:31:19.409 | 30.00th=[ 231], 40.00th=[ 233], 50.00th=[ 235], 60.00th=[ 237], 00:31:19.409 | 70.00th=[ 239], 80.00th=[ 241], 90.00th=[ 247], 95.00th=[ 258], 00:31:19.409 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:19.409 | 99.99th=[41157] 00:31:19.409 write: IOPS=999, BW=3996KiB/s (4092kB/s)(4096KiB/1025msec); 0 zone resets 00:31:19.409 slat (nsec): min=9357, max=42215, avg=10698.97, stdev=2198.73 00:31:19.409 clat (usec): min=134, max=516, avg=185.21, stdev=22.22 00:31:19.409 lat (usec): min=156, max=558, avg=195.90, stdev=22.68 00:31:19.409 clat 
percentiles (usec): 00:31:19.409 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:31:19.409 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 184], 60.00th=[ 190], 00:31:19.409 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 219], 00:31:19.409 | 99.00th=[ 235], 99.50th=[ 249], 99.90th=[ 273], 99.95th=[ 519], 00:31:19.409 | 99.99th=[ 519] 00:31:19.409 bw ( KiB/s): min= 8192, max= 8192, per=45.56%, avg=8192.00, stdev= 0.00, samples=1 00:31:19.409 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:19.409 lat (usec) : 250=97.36%, 500=1.48%, 750=0.06% 00:31:19.409 lat (msec) : 50=1.09% 00:31:19.409 cpu : usr=0.78%, sys=1.56%, ctx=1554, majf=0, minf=1 00:31:19.409 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:19.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.409 issued rwts: total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:19.409 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:19.409 job2: (groupid=0, jobs=1): err= 0: pid=2750738: Thu Nov 28 12:55:01 2024 00:31:19.409 read: IOPS=22, BW=91.7KiB/s (93.9kB/s)(92.0KiB/1003msec) 00:31:19.409 slat (nsec): min=9600, max=24363, avg=20681.96, stdev=3824.75 00:31:19.409 clat (usec): min=394, max=41072, avg=39166.41, stdev=8453.41 00:31:19.409 lat (usec): min=411, max=41094, avg=39187.09, stdev=8454.18 00:31:19.409 clat percentiles (usec): 00:31:19.409 | 1.00th=[ 396], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:19.409 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:19.409 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:19.409 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:19.409 | 99.99th=[41157] 00:31:19.409 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:31:19.409 slat (nsec): 
min=10058, max=37565, avg=11225.62, stdev=2037.81 00:31:19.409 clat (usec): min=163, max=321, avg=183.60, stdev=12.01 00:31:19.409 lat (usec): min=174, max=358, avg=194.83, stdev=12.85 00:31:19.409 clat percentiles (usec): 00:31:19.409 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 176], 00:31:19.409 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 186], 00:31:19.409 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 196], 95.00th=[ 204], 00:31:19.409 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 322], 99.95th=[ 322], 00:31:19.409 | 99.99th=[ 322] 00:31:19.409 bw ( KiB/s): min= 4096, max= 4096, per=22.78%, avg=4096.00, stdev= 0.00, samples=1 00:31:19.409 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:19.409 lat (usec) : 250=95.51%, 500=0.37% 00:31:19.409 lat (msec) : 50=4.11% 00:31:19.409 cpu : usr=0.60%, sys=0.70%, ctx=535, majf=0, minf=1 00:31:19.409 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:19.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.409 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:19.409 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:19.409 job3: (groupid=0, jobs=1): err= 0: pid=2750739: Thu Nov 28 12:55:01 2024 00:31:19.409 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:31:19.409 slat (nsec): min=11076, max=23337, avg=20943.41, stdev=3932.76 00:31:19.409 clat (usec): min=40488, max=41089, avg=40951.25, stdev=126.09 00:31:19.409 lat (usec): min=40500, max=41111, avg=40972.19, stdev=127.55 00:31:19.409 clat percentiles (usec): 00:31:19.409 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:19.409 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:19.409 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:19.409 | 99.00th=[41157], 
99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:19.409 | 99.99th=[41157] 00:31:19.409 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:31:19.409 slat (nsec): min=12549, max=44996, avg=13789.28, stdev=2401.86 00:31:19.409 clat (usec): min=149, max=260, avg=193.82, stdev=14.37 00:31:19.409 lat (usec): min=170, max=296, avg=207.60, stdev=14.53 00:31:19.409 clat percentiles (usec): 00:31:19.409 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 182], 00:31:19.409 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 196], 00:31:19.409 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 219], 00:31:19.409 | 99.00th=[ 233], 99.50th=[ 247], 99.90th=[ 262], 99.95th=[ 262], 00:31:19.409 | 99.99th=[ 262] 00:31:19.409 bw ( KiB/s): min= 4096, max= 4096, per=22.78%, avg=4096.00, stdev= 0.00, samples=1 00:31:19.409 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:19.409 lat (usec) : 250=95.51%, 500=0.37% 00:31:19.409 lat (msec) : 50=4.12% 00:31:19.409 cpu : usr=0.30%, sys=1.19%, ctx=535, majf=0, minf=1 00:31:19.409 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:19.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.409 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:19.409 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:19.409 00:31:19.409 Run status group 0 (all jobs): 00:31:19.409 READ: bw=10.6MiB/s (11.1MB/s), 87.1KiB/s-8847KiB/s (89.2kB/s-9059kB/s), io=10.9MiB (11.4MB), run=1001-1025msec 00:31:19.409 WRITE: bw=17.6MiB/s (18.4MB/s), 2028KiB/s-9.99MiB/s (2076kB/s-10.5MB/s), io=18.0MiB (18.9MB), run=1001-1025msec 00:31:19.409 00:31:19.409 Disk stats (read/write): 00:31:19.409 nvme0n1: ios=2004/2048, merge=0/0, ticks=527/334, in_queue=861, util=87.07% 00:31:19.409 nvme0n2: ios=550/1024, merge=0/0, ticks=1279/190, 
in_queue=1469, util=99.19% 00:31:19.409 nvme0n3: ios=27/512, merge=0/0, ticks=870/87, in_queue=957, util=91.16% 00:31:19.409 nvme0n4: ios=44/512, merge=0/0, ticks=1725/96, in_queue=1821, util=98.43% 00:31:19.409 12:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:19.409 [global] 00:31:19.409 thread=1 00:31:19.409 invalidate=1 00:31:19.409 rw=write 00:31:19.409 time_based=1 00:31:19.409 runtime=1 00:31:19.409 ioengine=libaio 00:31:19.409 direct=1 00:31:19.409 bs=4096 00:31:19.409 iodepth=128 00:31:19.409 norandommap=0 00:31:19.409 numjobs=1 00:31:19.409 00:31:19.409 verify_dump=1 00:31:19.409 verify_backlog=512 00:31:19.409 verify_state_save=0 00:31:19.409 do_verify=1 00:31:19.409 verify=crc32c-intel 00:31:19.409 [job0] 00:31:19.409 filename=/dev/nvme0n1 00:31:19.409 [job1] 00:31:19.409 filename=/dev/nvme0n2 00:31:19.409 [job2] 00:31:19.409 filename=/dev/nvme0n3 00:31:19.409 [job3] 00:31:19.409 filename=/dev/nvme0n4 00:31:19.409 Could not set queue depth (nvme0n1) 00:31:19.409 Could not set queue depth (nvme0n2) 00:31:19.409 Could not set queue depth (nvme0n3) 00:31:19.409 Could not set queue depth (nvme0n4) 00:31:19.667 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:19.667 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:19.668 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:19.668 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:19.668 fio-3.35 00:31:19.668 Starting 4 threads 00:31:21.058 00:31:21.058 job0: (groupid=0, jobs=1): err= 0: pid=2751104: Thu Nov 28 12:55:03 2024 00:31:21.058 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:31:21.058 slat 
(nsec): min=1126, max=62120k, avg=181884.40, stdev=1613554.97 00:31:21.058 clat (msec): min=8, max=108, avg=22.75, stdev=19.92 00:31:21.058 lat (msec): min=8, max=108, avg=22.93, stdev=20.04 00:31:21.058 clat percentiles (msec): 00:31:21.058 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 11], 00:31:21.058 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14], 00:31:21.058 | 70.00th=[ 23], 80.00th=[ 36], 90.00th=[ 53], 95.00th=[ 62], 00:31:21.058 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 109], 99.95th=[ 109], 00:31:21.058 | 99.99th=[ 109] 00:31:21.058 write: IOPS=3533, BW=13.8MiB/s (14.5MB/s)(13.8MiB/1003msec); 0 zone resets 00:31:21.058 slat (nsec): min=1926, max=8859.2k, avg=120672.00, stdev=591964.19 00:31:21.058 clat (usec): min=1828, max=90965, avg=16030.17, stdev=11152.76 00:31:21.058 lat (usec): min=2465, max=90971, avg=16150.84, stdev=11183.52 00:31:21.058 clat percentiles (usec): 00:31:21.058 | 1.00th=[ 5080], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[10290], 00:31:21.058 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11469], 60.00th=[11994], 00:31:21.058 | 70.00th=[13829], 80.00th=[23200], 90.00th=[27657], 95.00th=[36963], 00:31:21.058 | 99.00th=[74974], 99.50th=[90702], 99.90th=[90702], 99.95th=[90702], 00:31:21.058 | 99.99th=[90702] 00:31:21.058 bw ( KiB/s): min= 8175, max=19144, per=20.61%, avg=13659.50, stdev=7756.25, samples=2 00:31:21.058 iops : min= 2043, max= 4786, avg=3414.50, stdev=1939.59, samples=2 00:31:21.058 lat (msec) : 2=0.02%, 4=0.36%, 10=9.79%, 20=61.17%, 50=22.51% 00:31:21.058 lat (msec) : 100=5.41%, 250=0.74% 00:31:21.058 cpu : usr=1.30%, sys=3.49%, ctx=352, majf=0, minf=2 00:31:21.058 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:31:21.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:21.058 issued rwts: total=3072,3544,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.058 latency 
: target=0, window=0, percentile=100.00%, depth=128 00:31:21.058 job1: (groupid=0, jobs=1): err= 0: pid=2751105: Thu Nov 28 12:55:03 2024 00:31:21.058 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:31:21.058 slat (nsec): min=1224, max=12546k, avg=109960.08, stdev=765565.00 00:31:21.058 clat (usec): min=2122, max=55360, avg=15305.65, stdev=6530.04 00:31:21.058 lat (usec): min=2133, max=55366, avg=15415.61, stdev=6575.61 00:31:21.058 clat percentiles (usec): 00:31:21.058 | 1.00th=[ 7111], 5.00th=[ 8225], 10.00th=[ 9634], 20.00th=[10290], 00:31:21.058 | 30.00th=[11076], 40.00th=[12911], 50.00th=[13960], 60.00th=[14615], 00:31:21.058 | 70.00th=[16712], 80.00th=[18744], 90.00th=[24249], 95.00th=[29230], 00:31:21.058 | 99.00th=[35390], 99.50th=[39584], 99.90th=[55313], 99.95th=[55313], 00:31:21.058 | 99.99th=[55313] 00:31:21.058 write: IOPS=3428, BW=13.4MiB/s (14.0MB/s)(13.4MiB/1003msec); 0 zone resets 00:31:21.058 slat (usec): min=2, max=24109, avg=175.45, stdev=1179.84 00:31:21.058 clat (usec): min=662, max=104357, avg=21792.68, stdev=19189.96 00:31:21.058 lat (msec): min=2, max=104, avg=21.97, stdev=19.32 00:31:21.058 clat percentiles (msec): 00:31:21.058 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 11], 00:31:21.058 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 16], 00:31:21.058 | 70.00th=[ 22], 80.00th=[ 35], 90.00th=[ 50], 95.00th=[ 63], 00:31:21.058 | 99.00th=[ 97], 99.50th=[ 102], 99.90th=[ 105], 99.95th=[ 105], 00:31:21.058 | 99.99th=[ 105] 00:31:21.058 bw ( KiB/s): min= 9524, max=16952, per=19.98%, avg=13238.00, stdev=5252.39, samples=2 00:31:21.058 iops : min= 2381, max= 4238, avg=3309.50, stdev=1313.10, samples=2 00:31:21.058 lat (usec) : 750=0.02% 00:31:21.058 lat (msec) : 4=0.49%, 10=16.20%, 20=57.53%, 50=20.63%, 100=4.79% 00:31:21.058 lat (msec) : 250=0.34% 00:31:21.058 cpu : usr=2.00%, sys=4.09%, ctx=255, majf=0, minf=1 00:31:21.058 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:31:21.058 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:21.058 issued rwts: total=3072,3439,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.058 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:21.058 job2: (groupid=0, jobs=1): err= 0: pid=2751106: Thu Nov 28 12:55:03 2024 00:31:21.058 read: IOPS=6084, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1008msec) 00:31:21.058 slat (nsec): min=1384, max=10846k, avg=84596.39, stdev=708608.29 00:31:21.058 clat (usec): min=2117, max=26063, avg=11076.15, stdev=3253.90 00:31:21.058 lat (usec): min=3378, max=26090, avg=11160.74, stdev=3315.62 00:31:21.058 clat percentiles (usec): 00:31:21.058 | 1.00th=[ 6587], 5.00th=[ 7177], 10.00th=[ 7832], 20.00th=[ 8455], 00:31:21.058 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[10290], 60.00th=[11076], 00:31:21.058 | 70.00th=[12387], 80.00th=[14222], 90.00th=[15533], 95.00th=[17171], 00:31:21.058 | 99.00th=[20841], 99.50th=[22414], 99.90th=[24773], 99.95th=[25297], 00:31:21.058 | 99.99th=[26084] 00:31:21.058 write: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec); 0 zone resets 00:31:21.058 slat (usec): min=2, max=11641, avg=71.23, stdev=503.75 00:31:21.058 clat (usec): min=1469, max=21909, avg=9750.56, stdev=2692.44 00:31:21.058 lat (usec): min=1477, max=22905, avg=9821.79, stdev=2725.62 00:31:21.058 clat percentiles (usec): 00:31:21.058 | 1.00th=[ 3294], 5.00th=[ 5407], 10.00th=[ 6718], 20.00th=[ 7832], 00:31:21.058 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9634], 00:31:21.058 | 70.00th=[10683], 80.00th=[11338], 90.00th=[13829], 95.00th=[14615], 00:31:21.058 | 99.00th=[16909], 99.50th=[16909], 99.90th=[21890], 99.95th=[21890], 00:31:21.058 | 99.99th=[21890] 00:31:21.058 bw ( KiB/s): min=20504, max=28590, per=37.04%, avg=24547.00, stdev=5717.67, samples=2 00:31:21.058 iops : min= 5126, max= 7147, avg=6136.50, stdev=1429.06, samples=2 00:31:21.058 lat (msec) : 
2=0.11%, 4=0.81%, 10=55.35%, 20=42.79%, 50=0.94% 00:31:21.058 cpu : usr=6.06%, sys=5.66%, ctx=528, majf=0, minf=1 00:31:21.058 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:31:21.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:21.058 issued rwts: total=6133,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.058 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:21.058 job3: (groupid=0, jobs=1): err= 0: pid=2751107: Thu Nov 28 12:55:03 2024 00:31:21.058 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:31:21.058 slat (nsec): min=1194, max=11423k, avg=110510.29, stdev=711342.97 00:31:21.058 clat (usec): min=4747, max=49762, avg=14075.45, stdev=5277.90 00:31:21.058 lat (usec): min=4755, max=49772, avg=14185.96, stdev=5334.06 00:31:21.058 clat percentiles (usec): 00:31:21.058 | 1.00th=[ 5473], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[11600], 00:31:21.058 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12911], 60.00th=[13566], 00:31:21.058 | 70.00th=[14484], 80.00th=[15008], 90.00th=[18744], 95.00th=[21103], 00:31:21.058 | 99.00th=[39060], 99.50th=[44303], 99.90th=[49546], 99.95th=[49546], 00:31:21.058 | 99.99th=[49546] 00:31:21.058 write: IOPS=3544, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1008msec); 0 zone resets 00:31:21.058 slat (usec): min=2, max=24291, avg=174.44, stdev=1055.78 00:31:21.058 clat (usec): min=303, max=78117, avg=23591.72, stdev=18004.08 00:31:21.058 lat (usec): min=525, max=78126, avg=23766.16, stdev=18112.18 00:31:21.058 clat percentiles (usec): 00:31:21.058 | 1.00th=[ 3785], 5.00th=[ 7963], 10.00th=[ 8848], 20.00th=[ 9372], 00:31:21.058 | 30.00th=[11207], 40.00th=[11469], 50.00th=[12911], 60.00th=[20055], 00:31:21.058 | 70.00th=[32113], 80.00th=[38536], 90.00th=[50594], 95.00th=[58459], 00:31:21.058 | 99.00th=[78119], 99.50th=[78119], 99.90th=[78119], 99.95th=[78119], 00:31:21.058 | 
99.99th=[78119] 00:31:21.058 bw ( KiB/s): min=11176, max=16351, per=20.77%, avg=13763.50, stdev=3659.28, samples=2 00:31:21.058 iops : min= 2794, max= 4087, avg=3440.50, stdev=914.29, samples=2 00:31:21.058 lat (usec) : 500=0.02%, 750=0.05% 00:31:21.058 lat (msec) : 4=0.56%, 10=18.04%, 20=56.48%, 50=19.19%, 100=5.67% 00:31:21.058 cpu : usr=2.78%, sys=5.96%, ctx=282, majf=0, minf=2 00:31:21.058 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:31:21.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:21.058 issued rwts: total=3072,3573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.058 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:21.058 00:31:21.058 Run status group 0 (all jobs): 00:31:21.058 READ: bw=59.5MiB/s (62.4MB/s), 11.9MiB/s-23.8MiB/s (12.5MB/s-24.9MB/s), io=60.0MiB (62.9MB), run=1003-1008msec 00:31:21.058 WRITE: bw=64.7MiB/s (67.9MB/s), 13.4MiB/s-23.8MiB/s (14.0MB/s-25.0MB/s), io=65.2MiB (68.4MB), run=1003-1008msec 00:31:21.058 00:31:21.058 Disk stats (read/write): 00:31:21.058 nvme0n1: ios=2332/2560, merge=0/0, ticks=16727/10901, in_queue=27628, util=97.80% 00:31:21.058 nvme0n2: ios=2204/2560, merge=0/0, ticks=18078/27640, in_queue=45718, util=97.47% 00:31:21.058 nvme0n3: ios=5185/5631, merge=0/0, ticks=52465/52060, in_queue=104525, util=97.72% 00:31:21.058 nvme0n4: ios=3131/3220, merge=0/0, ticks=35856/57219, in_queue=93075, util=98.33% 00:31:21.058 12:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:21.058 [global] 00:31:21.058 thread=1 00:31:21.058 invalidate=1 00:31:21.058 rw=randwrite 00:31:21.058 time_based=1 00:31:21.058 runtime=1 00:31:21.058 ioengine=libaio 00:31:21.058 direct=1 00:31:21.058 bs=4096 00:31:21.059 iodepth=128 
00:31:21.059 norandommap=0 00:31:21.059 numjobs=1 00:31:21.059 00:31:21.059 verify_dump=1 00:31:21.059 verify_backlog=512 00:31:21.059 verify_state_save=0 00:31:21.059 do_verify=1 00:31:21.059 verify=crc32c-intel 00:31:21.059 [job0] 00:31:21.059 filename=/dev/nvme0n1 00:31:21.059 [job1] 00:31:21.059 filename=/dev/nvme0n2 00:31:21.059 [job2] 00:31:21.059 filename=/dev/nvme0n3 00:31:21.059 [job3] 00:31:21.059 filename=/dev/nvme0n4 00:31:21.059 Could not set queue depth (nvme0n1) 00:31:21.059 Could not set queue depth (nvme0n2) 00:31:21.059 Could not set queue depth (nvme0n3) 00:31:21.059 Could not set queue depth (nvme0n4) 00:31:21.319 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:21.319 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:21.319 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:21.319 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:21.319 fio-3.35 00:31:21.319 Starting 4 threads 00:31:22.698 00:31:22.698 job0: (groupid=0, jobs=1): err= 0: pid=2751484: Thu Nov 28 12:55:04 2024 00:31:22.698 read: IOPS=4629, BW=18.1MiB/s (19.0MB/s)(18.1MiB/1003msec) 00:31:22.698 slat (nsec): min=1595, max=17856k, avg=101735.25, stdev=644799.13 00:31:22.698 clat (usec): min=2663, max=47389, avg=12948.36, stdev=5735.03 00:31:22.698 lat (usec): min=2665, max=47397, avg=13050.10, stdev=5778.03 00:31:22.698 clat percentiles (usec): 00:31:22.698 | 1.00th=[ 7504], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[ 9896], 00:31:22.698 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11863], 60.00th=[12256], 00:31:22.698 | 70.00th=[12780], 80.00th=[13698], 90.00th=[14877], 95.00th=[25297], 00:31:22.698 | 99.00th=[38011], 99.50th=[47449], 99.90th=[47449], 99.95th=[47449], 00:31:22.698 | 99.99th=[47449] 00:31:22.698 write: 
IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:31:22.698 slat (usec): min=2, max=20861, avg=97.22, stdev=667.73 00:31:22.698 clat (usec): min=5439, max=55017, avg=13058.05, stdev=6187.81 00:31:22.698 lat (usec): min=5442, max=55088, avg=13155.27, stdev=6240.67 00:31:22.698 clat percentiles (usec): 00:31:22.698 | 1.00th=[ 7111], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[10028], 00:31:22.698 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10945], 60.00th=[11731], 00:31:22.698 | 70.00th=[11994], 80.00th=[13173], 90.00th=[20841], 95.00th=[24249], 00:31:22.698 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[44827], 00:31:22.698 | 99.99th=[54789] 00:31:22.698 bw ( KiB/s): min=20080, max=20103, per=27.16%, avg=20091.50, stdev=16.26, samples=2 00:31:22.698 iops : min= 5020, max= 5025, avg=5022.50, stdev= 3.54, samples=2 00:31:22.698 lat (msec) : 4=0.18%, 10=21.30%, 20=69.79%, 50=8.70%, 100=0.02% 00:31:22.698 cpu : usr=3.69%, sys=5.59%, ctx=488, majf=0, minf=1 00:31:22.698 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:22.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:22.698 issued rwts: total=4643,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.698 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:22.698 job1: (groupid=0, jobs=1): err= 0: pid=2751485: Thu Nov 28 12:55:04 2024 00:31:22.698 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:31:22.698 slat (nsec): min=1366, max=10546k, avg=102337.65, stdev=642204.19 00:31:22.698 clat (usec): min=1966, max=29109, avg=13493.93, stdev=3790.20 00:31:22.698 lat (usec): min=1973, max=29136, avg=13596.27, stdev=3821.37 00:31:22.698 clat percentiles (usec): 00:31:22.698 | 1.00th=[ 6128], 5.00th=[ 8225], 10.00th=[ 9372], 20.00th=[10028], 00:31:22.698 | 30.00th=[11207], 40.00th=[12256], 50.00th=[12911], 60.00th=[14222], 
00:31:22.698 | 70.00th=[15533], 80.00th=[16188], 90.00th=[19006], 95.00th=[20841], 00:31:22.698 | 99.00th=[23200], 99.50th=[23987], 99.90th=[24511], 99.95th=[24511], 00:31:22.698 | 99.99th=[29230] 00:31:22.698 write: IOPS=4801, BW=18.8MiB/s (19.7MB/s)(18.8MiB/1004msec); 0 zone resets 00:31:22.698 slat (usec): min=2, max=9108, avg=99.51, stdev=584.37 00:31:22.698 clat (usec): min=800, max=33428, avg=13409.26, stdev=4689.71 00:31:22.698 lat (usec): min=2986, max=33432, avg=13508.76, stdev=4732.94 00:31:22.698 clat percentiles (usec): 00:31:22.698 | 1.00th=[ 3916], 5.00th=[ 6456], 10.00th=[ 8455], 20.00th=[10290], 00:31:22.698 | 30.00th=[10814], 40.00th=[11469], 50.00th=[12125], 60.00th=[13173], 00:31:22.698 | 70.00th=[16450], 80.00th=[17433], 90.00th=[18220], 95.00th=[20055], 00:31:22.698 | 99.00th=[30278], 99.50th=[30802], 99.90th=[33424], 99.95th=[33424], 00:31:22.698 | 99.99th=[33424] 00:31:22.698 bw ( KiB/s): min=16872, max=20672, per=25.38%, avg=18772.00, stdev=2687.01, samples=2 00:31:22.698 iops : min= 4218, max= 5168, avg=4693.00, stdev=671.75, samples=2 00:31:22.698 lat (usec) : 1000=0.01% 00:31:22.698 lat (msec) : 2=0.06%, 4=0.85%, 10=17.58%, 20=75.40%, 50=6.10% 00:31:22.698 cpu : usr=3.59%, sys=7.18%, ctx=428, majf=0, minf=1 00:31:22.698 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:31:22.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:22.698 issued rwts: total=4608,4821,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.698 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:22.698 job2: (groupid=0, jobs=1): err= 0: pid=2751486: Thu Nov 28 12:55:04 2024 00:31:22.698 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:31:22.698 slat (nsec): min=1686, max=9528.8k, avg=134652.55, stdev=760750.66 00:31:22.699 clat (usec): min=10554, max=45380, avg=18126.22, stdev=6141.62 00:31:22.699 lat (usec): 
min=10564, max=48486, avg=18260.88, stdev=6203.75 00:31:22.699 clat percentiles (usec): 00:31:22.699 | 1.00th=[11207], 5.00th=[11863], 10.00th=[12649], 20.00th=[13566], 00:31:22.699 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14877], 60.00th=[16909], 00:31:22.699 | 70.00th=[21103], 80.00th=[23725], 90.00th=[27919], 95.00th=[29492], 00:31:22.699 | 99.00th=[35914], 99.50th=[38536], 99.90th=[45351], 99.95th=[45351], 00:31:22.699 | 99.99th=[45351] 00:31:22.699 write: IOPS=3504, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1005msec); 0 zone resets 00:31:22.699 slat (usec): min=2, max=8759, avg=160.12, stdev=703.20 00:31:22.699 clat (usec): min=3904, max=53598, avg=20129.12, stdev=8495.53 00:31:22.699 lat (usec): min=4761, max=53609, avg=20289.25, stdev=8556.72 00:31:22.699 clat percentiles (usec): 00:31:22.699 | 1.00th=[ 9372], 5.00th=[11863], 10.00th=[12125], 20.00th=[13304], 00:31:22.699 | 30.00th=[13698], 40.00th=[14484], 50.00th=[16909], 60.00th=[19268], 00:31:22.699 | 70.00th=[25035], 80.00th=[26608], 90.00th=[31851], 95.00th=[36439], 00:31:22.699 | 99.00th=[47973], 99.50th=[50594], 99.90th=[53740], 99.95th=[53740], 00:31:22.699 | 99.99th=[53740] 00:31:22.699 bw ( KiB/s): min=12263, max=14872, per=18.34%, avg=13567.50, stdev=1844.84, samples=2 00:31:22.699 iops : min= 3065, max= 3718, avg=3391.50, stdev=461.74, samples=2 00:31:22.699 lat (msec) : 4=0.02%, 10=0.67%, 20=63.16%, 50=35.85%, 100=0.30% 00:31:22.699 cpu : usr=1.69%, sys=4.98%, ctx=419, majf=0, minf=1 00:31:22.699 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:31:22.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:22.699 issued rwts: total=3072,3522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.699 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:22.699 job3: (groupid=0, jobs=1): err= 0: pid=2751487: Thu Nov 28 12:55:04 2024 00:31:22.699 read: IOPS=4987, 
BW=19.5MiB/s (20.4MB/s)(19.5MiB/1003msec) 00:31:22.699 slat (nsec): min=1172, max=11704k, avg=98308.19, stdev=606886.53 00:31:22.699 clat (usec): min=1674, max=23416, avg=12574.57, stdev=2824.05 00:31:22.699 lat (usec): min=1686, max=23442, avg=12672.88, stdev=2854.47 00:31:22.699 clat percentiles (usec): 00:31:22.699 | 1.00th=[ 2245], 5.00th=[ 8455], 10.00th=[10028], 20.00th=[11338], 00:31:22.699 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12387], 60.00th=[12911], 00:31:22.699 | 70.00th=[13698], 80.00th=[14484], 90.00th=[15401], 95.00th=[16319], 00:31:22.699 | 99.00th=[21365], 99.50th=[22152], 99.90th=[22152], 99.95th=[22152], 00:31:22.699 | 99.99th=[23462] 00:31:22.699 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:31:22.699 slat (nsec): min=1960, max=12414k, avg=93014.35, stdev=558687.12 00:31:22.699 clat (usec): min=1236, max=25322, avg=12557.36, stdev=2307.29 00:31:22.699 lat (usec): min=1275, max=25329, avg=12650.37, stdev=2355.32 00:31:22.699 clat percentiles (usec): 00:31:22.699 | 1.00th=[ 6652], 5.00th=[ 9241], 10.00th=[10159], 20.00th=[11338], 00:31:22.699 | 30.00th=[11731], 40.00th=[11863], 50.00th=[12256], 60.00th=[12911], 00:31:22.699 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14484], 95.00th=[15926], 00:31:22.699 | 99.00th=[21890], 99.50th=[24249], 99.90th=[25297], 99.95th=[25297], 00:31:22.699 | 99.99th=[25297] 00:31:22.699 bw ( KiB/s): min=20480, max=20480, per=27.69%, avg=20480.00, stdev= 0.00, samples=2 00:31:22.699 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:31:22.699 lat (msec) : 2=0.13%, 4=0.85%, 10=8.64%, 20=88.56%, 50=1.82% 00:31:22.699 cpu : usr=2.99%, sys=4.69%, ctx=569, majf=0, minf=2 00:31:22.699 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:22.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:22.699 issued rwts: total=5002,5120,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:31:22.699 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:22.699 00:31:22.699 Run status group 0 (all jobs): 00:31:22.699 READ: bw=67.3MiB/s (70.6MB/s), 11.9MiB/s-19.5MiB/s (12.5MB/s-20.4MB/s), io=67.7MiB (71.0MB), run=1003-1005msec 00:31:22.699 WRITE: bw=72.2MiB/s (75.7MB/s), 13.7MiB/s-19.9MiB/s (14.4MB/s-20.9MB/s), io=72.6MiB (76.1MB), run=1003-1005msec 00:31:22.699 00:31:22.699 Disk stats (read/write): 00:31:22.699 nvme0n1: ios=3641/4096, merge=0/0, ticks=21610/21565, in_queue=43175, util=100.00% 00:31:22.699 nvme0n2: ios=3730/4096, merge=0/0, ticks=38617/38392, in_queue=77009, util=97.55% 00:31:22.699 nvme0n3: ios=2598/3028, merge=0/0, ticks=15859/18983, in_queue=34842, util=98.09% 00:31:22.699 nvme0n4: ios=4135/4311, merge=0/0, ticks=29729/31385, in_queue=61114, util=97.85% 00:31:22.699 12:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:22.699 12:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2751714 00:31:22.699 12:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:22.699 12:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:22.699 [global] 00:31:22.699 thread=1 00:31:22.699 invalidate=1 00:31:22.699 rw=read 00:31:22.699 time_based=1 00:31:22.699 runtime=10 00:31:22.699 ioengine=libaio 00:31:22.699 direct=1 00:31:22.699 bs=4096 00:31:22.699 iodepth=1 00:31:22.699 norandommap=1 00:31:22.699 numjobs=1 00:31:22.699 00:31:22.699 [job0] 00:31:22.699 filename=/dev/nvme0n1 00:31:22.699 [job1] 00:31:22.699 filename=/dev/nvme0n2 00:31:22.699 [job2] 00:31:22.699 filename=/dev/nvme0n3 00:31:22.699 [job3] 00:31:22.699 filename=/dev/nvme0n4 00:31:22.699 Could not set queue depth (nvme0n1) 00:31:22.699 Could not set queue depth 
(nvme0n2) 00:31:22.699 Could not set queue depth (nvme0n3) 00:31:22.699 Could not set queue depth (nvme0n4) 00:31:22.699 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:22.699 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:22.699 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:22.699 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:22.699 fio-3.35 00:31:22.699 Starting 4 threads 00:31:25.994 12:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:25.994 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=38891520, buflen=4096 00:31:25.994 fio: pid=2751853, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:25.994 12:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:25.994 12:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:25.994 12:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:25.994 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=2494464, buflen=4096 00:31:25.994 fio: pid=2751852, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:25.994 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=46428160, buflen=4096 00:31:25.994 fio: pid=2751850, err=95/file:io_u.c:1889, func=io_u error, 
error=Operation not supported 00:31:25.994 12:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:25.994 12:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:26.252 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=36462592, buflen=4096 00:31:26.252 fio: pid=2751851, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:26.252 12:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:26.252 12:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:26.252 00:31:26.252 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2751850: Thu Nov 28 12:55:08 2024 00:31:26.252 read: IOPS=3655, BW=14.3MiB/s (15.0MB/s)(44.3MiB/3101msec) 00:31:26.252 slat (usec): min=4, max=28159, avg=11.52, stdev=302.38 00:31:26.252 clat (usec): min=181, max=41221, avg=258.66, stdev=766.74 00:31:26.252 lat (usec): min=188, max=41229, avg=270.18, stdev=824.85 00:31:26.252 clat percentiles (usec): 00:31:26.252 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 204], 20.00th=[ 225], 00:31:26.252 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:31:26.252 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 302], 00:31:26.252 | 99.00th=[ 351], 99.50th=[ 396], 99.90th=[ 502], 99.95th=[ 545], 00:31:26.252 | 99.99th=[41157] 00:31:26.252 bw ( KiB/s): min= 9360, max=17568, per=40.13%, avg=14737.67, stdev=2822.55, samples=6 00:31:26.252 iops : min= 2340, max= 4392, avg=3684.33, stdev=705.65, samples=6 
00:31:26.252 lat (usec) : 250=66.30%, 500=33.57%, 750=0.08% 00:31:26.252 lat (msec) : 50=0.04% 00:31:26.252 cpu : usr=1.52%, sys=4.81%, ctx=11338, majf=0, minf=2 00:31:26.252 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.252 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.252 issued rwts: total=11336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.252 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:26.252 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2751851: Thu Nov 28 12:55:08 2024 00:31:26.252 read: IOPS=2693, BW=10.5MiB/s (11.0MB/s)(34.8MiB/3305msec) 00:31:26.252 slat (usec): min=3, max=17687, avg=15.82, stdev=354.23 00:31:26.252 clat (usec): min=201, max=42041, avg=353.46, stdev=1979.64 00:31:26.252 lat (usec): min=206, max=42048, avg=369.28, stdev=2011.74 00:31:26.252 clat percentiles (usec): 00:31:26.252 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 225], 20.00th=[ 243], 00:31:26.252 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:31:26.252 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 318], 00:31:26.252 | 99.00th=[ 408], 99.50th=[ 433], 99.90th=[41157], 99.95th=[41157], 00:31:26.252 | 99.99th=[42206] 00:31:26.252 bw ( KiB/s): min= 1208, max=15376, per=28.59%, avg=10500.17, stdev=5598.12, samples=6 00:31:26.252 iops : min= 302, max= 3844, avg=2625.00, stdev=1399.49, samples=6 00:31:26.252 lat (usec) : 250=42.14%, 500=57.51%, 750=0.10% 00:31:26.252 lat (msec) : 50=0.24% 00:31:26.252 cpu : usr=0.48%, sys=2.39%, ctx=8910, majf=0, minf=2 00:31:26.252 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.252 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.252 
issued rwts: total=8903,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.252 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:26.252 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2751852: Thu Nov 28 12:55:08 2024 00:31:26.252 read: IOPS=210, BW=840KiB/s (860kB/s)(2436KiB/2899msec) 00:31:26.252 slat (usec): min=6, max=14912, avg=33.41, stdev=603.46 00:31:26.252 clat (usec): min=244, max=42038, avg=4690.77, stdev=12662.71 00:31:26.252 lat (usec): min=252, max=56035, avg=4724.20, stdev=12750.40 00:31:26.252 clat percentiles (usec): 00:31:26.252 | 1.00th=[ 251], 5.00th=[ 258], 10.00th=[ 260], 20.00th=[ 265], 00:31:26.252 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:31:26.252 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[40633], 95.00th=[41157], 00:31:26.252 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:31:26.252 | 99.99th=[42206] 00:31:26.252 bw ( KiB/s): min= 96, max= 4176, per=2.61%, avg=960.00, stdev=1800.29, samples=5 00:31:26.252 iops : min= 24, max= 1044, avg=240.00, stdev=450.07, samples=5 00:31:26.252 lat (usec) : 250=0.82%, 500=88.20% 00:31:26.252 lat (msec) : 50=10.82% 00:31:26.252 cpu : usr=0.03%, sys=0.28%, ctx=612, majf=0, minf=1 00:31:26.253 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.253 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.253 issued rwts: total=610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.253 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:26.253 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2751853: Thu Nov 28 12:55:08 2024 00:31:26.253 read: IOPS=3551, BW=13.9MiB/s (14.5MB/s)(37.1MiB/2674msec) 00:31:26.253 slat (nsec): min=6954, max=76343, avg=8211.61, stdev=1587.72 00:31:26.253 clat 
(usec): min=196, max=41525, avg=268.73, stdev=996.76 00:31:26.253 lat (usec): min=204, max=41534, avg=276.94, stdev=997.08 00:31:26.253 clat percentiles (usec): 00:31:26.253 | 1.00th=[ 206], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 233], 00:31:26.253 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 245], 00:31:26.253 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 273], 00:31:26.253 | 99.00th=[ 297], 99.50th=[ 359], 99.90th=[ 510], 99.95th=[41157], 00:31:26.253 | 99.99th=[41681] 00:31:26.253 bw ( KiB/s): min= 8312, max=16088, per=38.92%, avg=14291.20, stdev=3359.09, samples=5 00:31:26.253 iops : min= 2078, max= 4022, avg=3572.80, stdev=839.77, samples=5 00:31:26.253 lat (usec) : 250=72.98%, 500=26.83%, 750=0.09%, 1000=0.01% 00:31:26.253 lat (msec) : 2=0.01%, 50=0.06% 00:31:26.253 cpu : usr=2.39%, sys=5.24%, ctx=9497, majf=0, minf=2 00:31:26.253 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.253 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.253 issued rwts: total=9496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.253 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:26.253 00:31:26.253 Run status group 0 (all jobs): 00:31:26.253 READ: bw=35.9MiB/s (37.6MB/s), 840KiB/s-14.3MiB/s (860kB/s-15.0MB/s), io=119MiB (124MB), run=2674-3305msec 00:31:26.253 00:31:26.253 Disk stats (read/write): 00:31:26.253 nvme0n1: ios=11242/0, merge=0/0, ticks=2805/0, in_queue=2805, util=92.85% 00:31:26.253 nvme0n2: ios=8069/0, merge=0/0, ticks=2944/0, in_queue=2944, util=94.04% 00:31:26.253 nvme0n3: ios=607/0, merge=0/0, ticks=2769/0, in_queue=2769, util=95.65% 00:31:26.253 nvme0n4: ios=9136/0, merge=0/0, ticks=2338/0, in_queue=2338, util=96.34% 00:31:26.511 12:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:31:26.511 12:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:26.769 12:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:26.769 12:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:27.027 12:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:27.027 12:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:27.285 12:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:27.285 12:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:27.285 12:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:27.285 12:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2751714 00:31:27.285 12:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:27.285 12:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:27.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:27.543 12:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:27.543 12:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:27.543 12:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:27.543 12:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:27.543 12:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:27.543 12:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:27.543 12:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:27.543 12:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:27.543 12:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:27.543 nvmf hotplug test: fio failed as expected 00:31:27.543 12:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 
00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:27.802 rmmod nvme_tcp 00:31:27.802 rmmod nvme_fabrics 00:31:27.802 rmmod nvme_keyring 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2749078 ']' 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2749078 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2749078 ']' 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2749078 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:31:27.802 12:55:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2749078 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2749078' 00:31:27.802 killing process with pid 2749078 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2749078 00:31:27.802 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2749078 00:31:28.061 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:28.061 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:28.061 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:28.061 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:28.061 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:31:28.061 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:28.061 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:28.061 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:28.061 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:28.061 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.061 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:28.061 12:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.961 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:29.961 00:31:29.961 real 0m25.630s 00:31:29.961 user 1m31.672s 00:31:29.961 sys 0m11.188s 00:31:29.961 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:29.961 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:29.961 ************************************ 00:31:29.961 END TEST nvmf_fio_target 00:31:29.961 ************************************ 00:31:30.220 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:30.220 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:30.220 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:30.220 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:30.220 ************************************ 00:31:30.220 START TEST nvmf_bdevio 00:31:30.220 ************************************ 00:31:30.220 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:30.220 * Looking for test storage... 00:31:30.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 
00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:30.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.221 --rc genhtml_branch_coverage=1 00:31:30.221 --rc genhtml_function_coverage=1 00:31:30.221 --rc genhtml_legend=1 00:31:30.221 --rc geninfo_all_blocks=1 00:31:30.221 --rc geninfo_unexecuted_blocks=1 00:31:30.221 00:31:30.221 ' 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:30.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.221 --rc genhtml_branch_coverage=1 00:31:30.221 --rc genhtml_function_coverage=1 00:31:30.221 --rc genhtml_legend=1 00:31:30.221 --rc geninfo_all_blocks=1 00:31:30.221 --rc geninfo_unexecuted_blocks=1 00:31:30.221 00:31:30.221 ' 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:30.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.221 --rc genhtml_branch_coverage=1 00:31:30.221 --rc genhtml_function_coverage=1 00:31:30.221 --rc genhtml_legend=1 00:31:30.221 --rc geninfo_all_blocks=1 00:31:30.221 --rc geninfo_unexecuted_blocks=1 00:31:30.221 00:31:30.221 ' 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:30.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.221 --rc genhtml_branch_coverage=1 00:31:30.221 --rc genhtml_function_coverage=1 00:31:30.221 --rc genhtml_legend=1 
00:31:30.221 --rc geninfo_all_blocks=1 00:31:30.221 --rc geninfo_unexecuted_blocks=1 00:31:30.221 00:31:30.221 ' 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:30.221 12:55:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:30.221 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:30.222 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:30.222 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:30.222 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:30.222 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:30.481 12:55:12 
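Each nested `source` of paths/export.sh prepends the same /opt/golangci, /opt/protoc and /opt/go entries again, which is why the exported PATH above repeats them seven times over. Lookup stops at the first hit, so the duplicates are harmless, but a first-seen-order dedup is a one-liner; `dedupe_path` below is my own helper, not part of SPDK:

```shell
#!/usr/bin/env bash
# Deduplicate a PATH-like string, keeping the first occurrence of each entry.
# Illustrative only: SPDK's paths/export.sh simply prepends, as the log shows.
dedupe_path() {
    # Split on ':' (awk record separator), print each entry only once,
    # re-join with ':' and strip the trailing separator.
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

dedupe_path "/opt/go/bin:/usr/bin:/opt/go/bin:/bin"; echo
# prints: /opt/go/bin:/usr/bin:/bin
```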
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:30.481 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:30.481 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:30.481 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:30.481 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:30.481 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:30.481 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:30.481 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:30.481 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:30.481 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:30.481 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:30.481 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:30.481 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:30.481 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:30.481 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:35.753 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:31:35.753 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:31:35.753 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:35.753 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:35.753 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:35.753 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:35.753 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:35.753 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:31:35.753 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:35.753 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:35.754 12:55:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:35.754 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:35.754 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.754 12:55:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:35.754 Found net devices under 0000:86:00.0: cvl_0_0 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:35.754 Found net devices under 0000:86:00.1: cvl_0_1 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:35.754 12:55:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:35.754 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:35.754 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:35.754 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:35.754 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:35.754 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:35.754 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:31:35.754 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:35.754 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:35.754 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:35.754 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:35.754 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:35.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:35.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:31:35.754 00:31:35.754 --- 10.0.0.2 ping statistics --- 00:31:35.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.754 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:31:35.754 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:35.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:35.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:31:35.754 00:31:35.754 --- 10.0.0.1 ping statistics --- 00:31:35.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.754 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:31:35.754 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:35.754 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:31:35.755 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:35.755 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:35.755 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:35.755 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:35.755 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:35.755 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:35.755 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:35.755 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:35.755 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:35.755 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:35.755 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:36.013 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
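nvmftestinit above isolates the target end of the link: it creates namespace cvl_0_0_ns_spdk, moves the first port into it, addresses the two ends as 10.0.0.2/24 (target) and 10.0.0.1/24 (initiator), opens the firewall for TCP port 4420, and ping-checks both directions before launching nvmf_tgt inside the namespace. The same topology can be reproduced without the physical NICs using a veth pair; the sketch below requires root, and the namespace and interface names (spdk_tgt_ns, veth_tgt, veth_ini) are mine, not the script's:

```shell
#!/usr/bin/env bash
# Root-only sketch of the namespace layout built by nvmf/common.sh above,
# with a veth pair standing in for the physical cvl_0_0/cvl_0_1 ports.
set -e
[ "$(id -u)" -eq 0 ] || { echo "skipped: requires root"; exit 0; }

ip netns add spdk_tgt_ns
ip link add veth_ini type veth peer name veth_tgt
ip link set veth_tgt netns spdk_tgt_ns              # target side lives in the ns
ip addr add 10.0.0.1/24 dev veth_ini                # initiator address, host side
ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_ini up
ip netns exec spdk_tgt_ns ip link set veth_tgt up
ip netns exec spdk_tgt_ns ip link set lo up

# Two-way reachability check, as in the log, before starting the target
# (the real run then launches nvmf_tgt via: ip netns exec cvl_0_0_ns_spdk ...).
ping -c 1 10.0.0.2
ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1

ip netns del spdk_tgt_ns    # the real script defers teardown to nvmftestfini
```

Running the target under `ip netns exec` is what lets a single host exercise a real TCP path: the kernel routes 10.0.0.1 ↔ 10.0.0.2 across the namespace boundary instead of short-circuiting through loopback.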
nvmf/common.sh@509 -- # nvmfpid=2756092 00:31:36.013 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:31:36.013 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2756092 00:31:36.013 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2756092 ']' 00:31:36.013 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.013 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:36.013 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:36.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.013 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:36.013 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:36.013 [2024-11-28 12:55:18.319528] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:36.013 [2024-11-28 12:55:18.320472] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:31:36.013 [2024-11-28 12:55:18.320506] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.013 [2024-11-28 12:55:18.387752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:36.013 [2024-11-28 12:55:18.431076] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:36.013 [2024-11-28 12:55:18.431115] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:36.013 [2024-11-28 12:55:18.431122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:36.013 [2024-11-28 12:55:18.431132] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:36.013 [2024-11-28 12:55:18.431137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:36.013 [2024-11-28 12:55:18.432644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:36.013 [2024-11-28 12:55:18.432750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:36.013 [2024-11-28 12:55:18.432859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:36.013 [2024-11-28 12:55:18.432859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:36.013 [2024-11-28 12:55:18.501618] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:36.013 [2024-11-28 12:55:18.502751] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:36.013 [2024-11-28 12:55:18.502790] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:36.013 [2024-11-28 12:55:18.503044] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:36.013 [2024-11-28 12:55:18.503080] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:36.271 [2024-11-28 12:55:18.581375] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:36.271 Malloc0 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:36.271 [2024-11-28 12:55:18.661628] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:36.271 { 00:31:36.271 "params": { 00:31:36.271 "name": "Nvme$subsystem", 00:31:36.271 "trtype": "$TEST_TRANSPORT", 00:31:36.271 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.271 "adrfam": "ipv4", 00:31:36.271 "trsvcid": "$NVMF_PORT", 00:31:36.271 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.271 "hdgst": ${hdgst:-false}, 00:31:36.271 "ddgst": ${ddgst:-false} 00:31:36.271 }, 00:31:36.271 "method": "bdev_nvme_attach_controller" 00:31:36.271 } 00:31:36.271 EOF 00:31:36.271 )") 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:31:36.271 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:36.271 "params": { 00:31:36.271 "name": "Nvme1", 00:31:36.271 "trtype": "tcp", 00:31:36.271 "traddr": "10.0.0.2", 00:31:36.271 "adrfam": "ipv4", 00:31:36.271 "trsvcid": "4420", 00:31:36.271 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:36.271 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:36.271 "hdgst": false, 00:31:36.271 "ddgst": false 00:31:36.271 }, 00:31:36.271 "method": "bdev_nvme_attach_controller" 00:31:36.271 }' 00:31:36.271 [2024-11-28 12:55:18.712293] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:31:36.271 [2024-11-28 12:55:18.712337] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2756119 ] 00:31:36.271 [2024-11-28 12:55:18.775225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:36.528 [2024-11-28 12:55:18.825967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.528 [2024-11-28 12:55:18.825984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:36.528 [2024-11-28 12:55:18.825988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.528 I/O targets: 00:31:36.528 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:36.528 00:31:36.528 00:31:36.528 CUnit - A unit testing framework for C - Version 2.1-3 00:31:36.528 http://cunit.sourceforge.net/ 00:31:36.528 00:31:36.528 00:31:36.528 Suite: bdevio tests on: Nvme1n1 00:31:36.528 Test: blockdev write read block ...passed 00:31:36.785 Test: blockdev write zeroes read block ...passed 00:31:36.785 Test: blockdev write zeroes read no split ...passed 00:31:36.785 Test: blockdev 
write zeroes read split ...passed 00:31:36.785 Test: blockdev write zeroes read split partial ...passed 00:31:36.785 Test: blockdev reset ...[2024-11-28 12:55:19.165719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:36.785 [2024-11-28 12:55:19.165786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15dc350 (9): Bad file descriptor 00:31:36.785 [2024-11-28 12:55:19.218248] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:31:36.785 passed 00:31:36.785 Test: blockdev write read 8 blocks ...passed 00:31:36.785 Test: blockdev write read size > 128k ...passed 00:31:36.785 Test: blockdev write read invalid size ...passed 00:31:36.785 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:36.785 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:36.785 Test: blockdev write read max offset ...passed 00:31:37.041 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:37.041 Test: blockdev writev readv 8 blocks ...passed 00:31:37.041 Test: blockdev writev readv 30 x 1block ...passed 00:31:37.041 Test: blockdev writev readv block ...passed 00:31:37.041 Test: blockdev writev readv size > 128k ...passed 00:31:37.041 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:37.041 Test: blockdev comparev and writev ...[2024-11-28 12:55:19.431192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:37.041 [2024-11-28 12:55:19.431219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:37.041 [2024-11-28 12:55:19.431233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:37.041 
[2024-11-28 12:55:19.431241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:37.041 [2024-11-28 12:55:19.431543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:37.041 [2024-11-28 12:55:19.431553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:37.041 [2024-11-28 12:55:19.431565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:37.041 [2024-11-28 12:55:19.431573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:37.041 [2024-11-28 12:55:19.431859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:37.041 [2024-11-28 12:55:19.431869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:37.041 [2024-11-28 12:55:19.431881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:37.041 [2024-11-28 12:55:19.431889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:37.041 [2024-11-28 12:55:19.432190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:37.042 [2024-11-28 12:55:19.432202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:37.042 [2024-11-28 12:55:19.432214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:37.042 [2024-11-28 12:55:19.432221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:37.042 passed 00:31:37.042 Test: blockdev nvme passthru rw ...passed 00:31:37.042 Test: blockdev nvme passthru vendor specific ...[2024-11-28 12:55:19.514379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:37.042 [2024-11-28 12:55:19.514397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:37.042 [2024-11-28 12:55:19.514516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:37.042 [2024-11-28 12:55:19.514526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:37.042 [2024-11-28 12:55:19.514642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:37.042 [2024-11-28 12:55:19.514655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:37.042 [2024-11-28 12:55:19.514770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:37.042 [2024-11-28 12:55:19.514779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:37.042 passed 00:31:37.042 Test: blockdev nvme admin passthru ...passed 00:31:37.299 Test: blockdev copy ...passed 00:31:37.299 00:31:37.299 Run Summary: Type Total Ran Passed Failed Inactive 00:31:37.299 suites 1 1 n/a 0 0 00:31:37.299 tests 23 23 23 0 0 00:31:37.299 asserts 152 152 152 0 n/a 00:31:37.299 00:31:37.299 Elapsed time = 1.178 
seconds 00:31:37.299 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:37.299 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.299 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:37.299 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.299 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:37.299 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:37.299 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:37.299 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:31:37.299 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:37.299 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:31:37.299 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:37.299 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:37.299 rmmod nvme_tcp 00:31:37.299 rmmod nvme_fabrics 00:31:37.299 rmmod nvme_keyring 00:31:37.299 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:37.299 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:31:37.299 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:31:37.299 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2756092 ']' 00:31:37.299 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2756092 00:31:37.299 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2756092 ']' 00:31:37.299 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2756092 00:31:37.299 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:31:37.299 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:37.299 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2756092 00:31:37.557 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:31:37.557 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:31:37.557 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2756092' 00:31:37.557 killing process with pid 2756092 00:31:37.557 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2756092 00:31:37.557 12:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2756092 00:31:37.557 12:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:37.557 12:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:37.557 12:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:37.557 12:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:31:37.557 12:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:31:37.557 12:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:37.557 12:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:31:37.557 12:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:37.557 12:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:37.557 12:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.557 12:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:37.557 12:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.082 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:40.082 00:31:40.082 real 0m9.564s 00:31:40.082 user 0m8.451s 00:31:40.082 sys 0m4.909s 00:31:40.082 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:40.082 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:40.082 ************************************ 00:31:40.082 END TEST nvmf_bdevio 00:31:40.082 ************************************ 00:31:40.082 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:31:40.082 00:31:40.082 real 4m25.245s 00:31:40.082 user 9m4.919s 00:31:40.082 sys 1m46.784s 00:31:40.082 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:31:40.082 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:40.082 ************************************ 00:31:40.082 END TEST nvmf_target_core_interrupt_mode 00:31:40.082 ************************************ 00:31:40.082 12:55:22 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:40.082 12:55:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:40.082 12:55:22 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:40.082 12:55:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:40.082 ************************************ 00:31:40.082 START TEST nvmf_interrupt 00:31:40.082 ************************************ 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:40.082 * Looking for test storage... 
00:31:40.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:31:40.082 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:40.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.083 --rc genhtml_branch_coverage=1 00:31:40.083 --rc genhtml_function_coverage=1 00:31:40.083 --rc genhtml_legend=1 00:31:40.083 --rc geninfo_all_blocks=1 00:31:40.083 --rc geninfo_unexecuted_blocks=1 00:31:40.083 00:31:40.083 ' 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:40.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.083 --rc genhtml_branch_coverage=1 00:31:40.083 --rc 
genhtml_function_coverage=1 00:31:40.083 --rc genhtml_legend=1 00:31:40.083 --rc geninfo_all_blocks=1 00:31:40.083 --rc geninfo_unexecuted_blocks=1 00:31:40.083 00:31:40.083 ' 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:40.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.083 --rc genhtml_branch_coverage=1 00:31:40.083 --rc genhtml_function_coverage=1 00:31:40.083 --rc genhtml_legend=1 00:31:40.083 --rc geninfo_all_blocks=1 00:31:40.083 --rc geninfo_unexecuted_blocks=1 00:31:40.083 00:31:40.083 ' 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:40.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.083 --rc genhtml_branch_coverage=1 00:31:40.083 --rc genhtml_function_coverage=1 00:31:40.083 --rc genhtml_legend=1 00:31:40.083 --rc geninfo_all_blocks=1 00:31:40.083 --rc geninfo_unexecuted_blocks=1 00:31:40.083 00:31:40.083 ' 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.083 
12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.083 
12:55:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:40.083 12:55:22 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:40.083 
12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:31:40.083 12:55:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:46.645 12:55:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:46.645 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:46.646 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:46.646 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:46.646 12:55:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:46.646 Found net devices under 0000:86:00.0: cvl_0_0 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:46.646 Found net devices under 0000:86:00.1: cvl_0_1 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:46.646 12:55:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:46.646 12:55:28 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:46.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:46.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms
00:31:46.646
00:31:46.646 --- 10.0.0.2 ping statistics ---
00:31:46.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:46.646 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms
00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:46.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:46.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms
00:31:46.646
00:31:46.646 --- 10.0.0.1 ping statistics ---
00:31:46.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:46.646 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms
00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0
00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:46.646 12:55:28
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2759881 00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2759881 00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2759881 ']' 00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:46.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:46.646 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:46.646 [2024-11-28 12:55:28.281133] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:46.646 [2024-11-28 12:55:28.282075] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:31:46.646 [2024-11-28 12:55:28.282119] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:46.646 [2024-11-28 12:55:28.347098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:46.646 [2024-11-28 12:55:28.388991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:46.646 [2024-11-28 12:55:28.389030] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:46.646 [2024-11-28 12:55:28.389037] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:46.646 [2024-11-28 12:55:28.389043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:46.646 [2024-11-28 12:55:28.389049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:46.646 [2024-11-28 12:55:28.390297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:46.646 [2024-11-28 12:55:28.390305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.646 [2024-11-28 12:55:28.459407] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:46.646 [2024-11-28 12:55:28.459817] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:46.646 [2024-11-28 12:55:28.459831] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0
00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio
00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s
00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000
00:31:46.647 5000+0 records in
00:31:46.647 5000+0 records out
00:31:46.647 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0180788 s, 566 MB/s
00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048
00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:31:46.647 AIO0
00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256
00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:46.647 12:55:28
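The `setup_bdev_aio` step in the trace backs the AIO0 bdev with a plain file written by dd: 5000 blocks of 2048 bytes, i.e. 10,240,000 bytes (~9.8 MiB), which matches the dd summary in the log. The file-creation step alone can be reproduced as below; the temp path is illustrative and stands in for the Jenkins workspace path used in the trace.

```shell
#!/bin/sh
# Create an AIO backing file the same way the trace does:
# 2048-byte blocks x 5000 = 10240000 bytes.
aiofile=$(mktemp)
dd if=/dev/zero of="$aiofile" bs=2048 count=5000 2>/dev/null
wc -c < "$aiofile"   # prints 10240000
rm -f "$aiofile"
```

The resulting file is what `rpc_cmd bdev_aio_create <file> AIO0 2048` registers as a block device with a 2048-byte block size.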
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:46.647 [2024-11-28 12:55:28.587036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:46.647 [2024-11-28 12:55:28.611387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2759881 0 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2759881 0 idle 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2759881 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2759881 -w 256 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2759881 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.23 reactor_0' 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2759881 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.23 reactor_0 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:46.647 
12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2759881 1 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2759881 1 idle 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2759881 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2759881 -w 256 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2759885 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1' 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2759885 root 20 0 128.2g 
46848 34560 S 0.0 0.0 0:00.00 reactor_1 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2759977 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2759881 0 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2759881 0 busy 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2759881 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2759881 -w 256 00:31:46.647 12:55:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2759881 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.45 reactor_0' 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2759881 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.45 reactor_0 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:46.906 12:55:29 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2759881 1 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2759881 1 busy 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2759881 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2759881 -w 256 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2759885 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.29 reactor_1' 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2759885 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.29 reactor_1 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99
00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:31:46.906 12:55:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2759977
00:31:56.884 Initializing NVMe Controllers
00:31:56.884 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:56.884 Controller IO queue size 256, less than required.
00:31:56.884 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:56.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:56.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:56.884 Initialization complete. Launching workers.
00:31:56.884 ========================================================
00:31:56.884 Latency(us)
00:31:56.884 Device Information : IOPS MiB/s Average min max
00:31:56.885 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16071.65 62.78 15936.87 3276.00 22750.25
00:31:56.885 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 15814.06 61.77 16195.82 5025.72 22385.91
00:31:56.885 ========================================================
00:31:56.885 Total : 31885.71 124.55 16065.30 3276.00 22750.25
00:31:56.885
00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2759881 0
00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2759881 0 idle
00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2759881
00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2759881 -w 256
00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- #
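The reactor busy/idle checks running throughout this trace grep one thread's line out of `top -bHn 1` output and compare the ninth field (%CPU, after truncating the fractional part) against a threshold. A standalone sketch of just that classification logic is below; the threshold of 30% and the function name are taken from the values visible in the trace, not from SPDK's actual `interrupt/common.sh`, whose details may differ.

```shell
#!/bin/sh
# Classify a reactor thread as busy or idle from a single line of
# `top -bHn 1` per-thread output, mirroring the cpu_rate extraction
# seen in the trace (awk '{print $9}' pulls the %CPU column).
classify_reactor() {
    line=$1
    # awk splits on whitespace, so any leading spaces (which the
    # trace strips separately with sed) are handled here too.
    cpu_rate=$(printf '%s\n' "$line" | awk '{print $9}')
    # Drop the fractional part, as the trace does (99.9 -> 99).
    cpu_rate=${cpu_rate%.*}
    # 30% is the busy threshold visible in the log; assumed here.
    if [ "$cpu_rate" -ge 30 ]; then
        echo busy
    else
        echo idle
    fi
}

classify_reactor '2759881 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.45 reactor_0'  # busy
classify_reactor '2759885 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1'   # idle
```

This is why the log shows `cpu_rate=99.9` followed by `cpu_rate=99`: the percentage is read as a string and truncated to an integer before the threshold comparison.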
grep reactor_0 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2759881 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.22 reactor_0' 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2759881 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.22 reactor_0 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2759881 1 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2759881 1 idle 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2759881 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:56.885 12:55:39 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2759881 -w 256 00:31:56.885 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:57.143 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2759885 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1' 00:31:57.143 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2759885 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1 00:31:57.143 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:57.143 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:57.143 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:57.143 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:57.143 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:57.143 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:57.143 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:57.143 12:55:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:57.143 12:55:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:57.402 12:55:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:31:57.402 12:55:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:31:57.402 12:55:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:57.402 12:55:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:57.402 12:55:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:31:59.935 12:55:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:59.935 12:55:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:59.935 12:55:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:59.935 12:55:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:59.935 12:55:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:59.935 12:55:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:31:59.935 12:55:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:31:59.935 12:55:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2759881 0 00:31:59.935 12:55:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2759881 0 idle 00:31:59.935 12:55:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2759881 00:31:59.935 12:55:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:59.935 12:55:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:59.935 12:55:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:59.935 12:55:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:59.935 12:55:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:59.935 12:55:41 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:59.935 12:55:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:59.935 12:55:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:59.936 12:55:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:59.936 12:55:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:59.936 12:55:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2759881 -w 256 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2759881 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.37 reactor_0' 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2759881 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.37 reactor_0 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2759881 1 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2759881 1 idle 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2759881 00:31:59.936 
12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2759881 -w 256 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2759885 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.06 reactor_1' 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2759885 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.06 reactor_1 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:59.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:59.936 12:55:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:59.936 rmmod nvme_tcp 00:32:00.194 rmmod nvme_fabrics 00:32:00.194 rmmod nvme_keyring 00:32:00.194 12:55:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:00.194 12:55:42 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:00.194 12:55:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:00.194 12:55:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2759881 ']' 00:32:00.194 12:55:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2759881 00:32:00.194 12:55:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2759881 ']' 00:32:00.194 12:55:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2759881 00:32:00.194 12:55:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:00.194 12:55:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:00.194 12:55:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2759881 00:32:00.194 12:55:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:00.194 12:55:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:00.194 12:55:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2759881' 00:32:00.194 killing process with pid 2759881 00:32:00.194 12:55:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2759881 00:32:00.194 12:55:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2759881 00:32:00.453 12:55:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:00.453 12:55:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:00.453 12:55:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:00.453 12:55:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:00.453 12:55:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:00.453 12:55:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:00.453 12:55:42 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:32:00.453 12:55:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:00.453 12:55:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:00.453 12:55:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.453 12:55:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:00.453 12:55:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.355 12:55:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:02.355 00:32:02.355 real 0m22.617s 00:32:02.355 user 0m39.494s 00:32:02.355 sys 0m8.242s 00:32:02.355 12:55:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:02.355 12:55:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:02.355 ************************************ 00:32:02.355 END TEST nvmf_interrupt 00:32:02.355 ************************************ 00:32:02.355 00:32:02.355 real 26m44.349s 00:32:02.355 user 55m57.949s 00:32:02.355 sys 8m56.440s 00:32:02.355 12:55:44 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:02.355 12:55:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:02.355 ************************************ 00:32:02.355 END TEST nvmf_tcp 00:32:02.355 ************************************ 00:32:02.614 12:55:44 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:02.614 12:55:44 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:02.614 12:55:44 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:02.614 12:55:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:02.614 12:55:44 -- common/autotest_common.sh@10 -- # set +x 00:32:02.614 ************************************ 
00:32:02.614 START TEST spdkcli_nvmf_tcp 00:32:02.614 ************************************ 00:32:02.614 12:55:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:02.614 * Looking for test storage... 00:32:02.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:02.614 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:02.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.615 --rc genhtml_branch_coverage=1 00:32:02.615 --rc genhtml_function_coverage=1 00:32:02.615 --rc genhtml_legend=1 00:32:02.615 --rc geninfo_all_blocks=1 00:32:02.615 --rc geninfo_unexecuted_blocks=1 00:32:02.615 00:32:02.615 ' 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:02.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.615 --rc genhtml_branch_coverage=1 00:32:02.615 --rc genhtml_function_coverage=1 00:32:02.615 --rc genhtml_legend=1 00:32:02.615 --rc geninfo_all_blocks=1 
00:32:02.615 --rc geninfo_unexecuted_blocks=1 00:32:02.615 00:32:02.615 ' 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:02.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.615 --rc genhtml_branch_coverage=1 00:32:02.615 --rc genhtml_function_coverage=1 00:32:02.615 --rc genhtml_legend=1 00:32:02.615 --rc geninfo_all_blocks=1 00:32:02.615 --rc geninfo_unexecuted_blocks=1 00:32:02.615 00:32:02.615 ' 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:02.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.615 --rc genhtml_branch_coverage=1 00:32:02.615 --rc genhtml_function_coverage=1 00:32:02.615 --rc genhtml_legend=1 00:32:02.615 --rc geninfo_all_blocks=1 00:32:02.615 --rc geninfo_unexecuted_blocks=1 00:32:02.615 00:32:02.615 ' 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:02.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2762762 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2762762 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2762762 ']' 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:02.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:02.615 12:55:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:02.873 [2024-11-28 12:55:45.151521] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:32:02.873 [2024-11-28 12:55:45.151572] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2762762 ] 00:32:02.873 [2024-11-28 12:55:45.213572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:02.873 [2024-11-28 12:55:45.257720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:02.873 [2024-11-28 12:55:45.257724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.873 12:55:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:02.873 12:55:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:02.873 12:55:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:02.873 12:55:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:02.873 12:55:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:02.873 12:55:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:02.873 12:55:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:02.873 12:55:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter 
spdkcli_create_nvmf_config 00:32:02.873 12:55:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:02.873 12:55:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:03.130 12:55:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:03.130 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:03.130 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:03.130 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:03.130 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:03.130 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:03.130 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:03.130 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:03.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:03.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:03.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:03.130 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:03.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:03.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:03.130 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:03.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:03.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:03.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:03.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:03.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:03.130 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:03.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:03.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:03.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:03.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:03.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:03.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:03.131 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:03.131 ' 00:32:05.657 [2024-11-28 12:55:47.906268] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:07.029 [2024-11-28 12:55:49.126371] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:08.927 [2024-11-28 12:55:51.373369] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:10.825 [2024-11-28 12:55:53.303497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:12.723 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:12.723 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:12.723 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:12.723 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:12.723 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:12.723 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:12.723 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:12.723 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:12.723 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:12.723 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:12.723 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:12.723 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:12.723 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:12.723 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:12.723 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 
allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:12.723 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:12.723 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:12.723 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:12.723 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:12.723 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:12.723 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:12.723 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:12.723 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:12.723 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:12.723 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:12.723 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:12.723 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:12.723 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:12.723 12:55:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:12.723 12:55:54 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:32:12.723 12:55:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:12.723 12:55:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:12.723 12:55:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:12.723 12:55:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:12.723 12:55:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:12.723 12:55:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:12.981 12:55:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:12.981 12:55:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:12.981 12:55:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:12.981 12:55:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:12.981 12:55:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:12.981 12:55:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:12.981 12:55:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:12.981 12:55:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:12.981 12:55:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:12.981 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:12.981 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:12.981 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:12.981 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:12.981 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:12.981 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:12.981 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:12.981 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:12.981 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:12.981 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:12.981 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:12.981 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:12.981 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:12.981 ' 00:32:18.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:18.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:18.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:18.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:18.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:18.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:18.244 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:18.244 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:18.244 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:18.244 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:18.244 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:18.244 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:18.244 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:18.244 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:18.244 12:56:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:18.244 12:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:18.244 12:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:18.244 12:56:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2762762 00:32:18.244 12:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2762762 ']' 00:32:18.244 12:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2762762 00:32:18.244 12:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:32:18.244 12:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:18.244 12:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2762762 00:32:18.244 12:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:18.244 12:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:18.244 12:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2762762' 00:32:18.244 killing process with pid 2762762 00:32:18.244 12:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2762762 00:32:18.244 12:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2762762 00:32:18.503 12:56:00 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:18.503 12:56:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:18.503 12:56:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2762762 ']' 00:32:18.503 12:56:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2762762 00:32:18.503 12:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2762762 ']' 00:32:18.503 12:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2762762 00:32:18.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2762762) - No such process 00:32:18.503 12:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2762762 is not found' 00:32:18.503 Process with pid 2762762 is not found 00:32:18.503 12:56:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:18.503 12:56:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:18.503 12:56:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:18.503 00:32:18.503 real 0m15.873s 00:32:18.503 user 0m33.154s 00:32:18.503 sys 0m0.686s 00:32:18.503 12:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:18.503 12:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:18.503 ************************************ 00:32:18.503 END TEST spdkcli_nvmf_tcp 00:32:18.503 ************************************ 00:32:18.503 12:56:00 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:18.503 12:56:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:18.503 12:56:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:32:18.503 12:56:00 -- common/autotest_common.sh@10 -- # set +x 00:32:18.503 ************************************ 00:32:18.503 START TEST nvmf_identify_passthru 00:32:18.503 ************************************ 00:32:18.503 12:56:00 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:18.503 * Looking for test storage... 00:32:18.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:18.503 12:56:00 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:18.503 12:56:00 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:32:18.503 12:56:00 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:18.761 12:56:01 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:18.761 12:56:01 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:18.761 12:56:01 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:18.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.761 --rc genhtml_branch_coverage=1 00:32:18.761 --rc genhtml_function_coverage=1 00:32:18.761 --rc genhtml_legend=1 00:32:18.761 --rc geninfo_all_blocks=1 00:32:18.761 --rc geninfo_unexecuted_blocks=1 00:32:18.761 
00:32:18.761 ' 00:32:18.761 12:56:01 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:18.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.761 --rc genhtml_branch_coverage=1 00:32:18.761 --rc genhtml_function_coverage=1 00:32:18.761 --rc genhtml_legend=1 00:32:18.761 --rc geninfo_all_blocks=1 00:32:18.761 --rc geninfo_unexecuted_blocks=1 00:32:18.761 00:32:18.761 ' 00:32:18.761 12:56:01 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:18.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.761 --rc genhtml_branch_coverage=1 00:32:18.761 --rc genhtml_function_coverage=1 00:32:18.761 --rc genhtml_legend=1 00:32:18.761 --rc geninfo_all_blocks=1 00:32:18.761 --rc geninfo_unexecuted_blocks=1 00:32:18.761 00:32:18.761 ' 00:32:18.761 12:56:01 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:18.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.761 --rc genhtml_branch_coverage=1 00:32:18.761 --rc genhtml_function_coverage=1 00:32:18.761 --rc genhtml_legend=1 00:32:18.761 --rc geninfo_all_blocks=1 00:32:18.761 --rc geninfo_unexecuted_blocks=1 00:32:18.761 00:32:18.761 ' 00:32:18.761 12:56:01 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:18.761 12:56:01 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:18.761 12:56:01 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.761 12:56:01 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.761 12:56:01 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.761 12:56:01 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:18.761 12:56:01 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:18.761 12:56:01 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:18.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:18.761 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:18.761 12:56:01 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:18.761 12:56:01 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:18.761 12:56:01 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.761 12:56:01 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.761 12:56:01 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.761 12:56:01 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:18.762 12:56:01 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.762 12:56:01 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:18.762 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:18.762 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:18.762 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:18.762 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:18.762 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:18.762 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.762 12:56:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:18.762 12:56:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.762 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:18.762 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:18.762 12:56:01 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:32:18.762 12:56:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:24.027 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:24.027 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:24.027 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:32:24.027 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:24.027 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:24.027 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:24.027 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:24.027 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:24.027 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:24.027 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:24.027 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:24.027 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:24.027 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:24.027 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:24.027 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:24.027 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:24.027 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:24.027 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:24.027 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:24.027 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:24.027 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:24.027 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:24.028 
12:56:06 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:24.028 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:24.028 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:24.028 Found net devices under 0000:86:00.0: cvl_0_0 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:24.028 12:56:06 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:24.028 Found net devices under 0000:86:00.1: cvl_0_1 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:24.028 
12:56:06 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:24.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:24.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:32:24.028 00:32:24.028 --- 10.0.0.2 ping statistics --- 00:32:24.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:24.028 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:24.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:24.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:32:24.028 00:32:24.028 --- 10.0.0.1 ping statistics --- 00:32:24.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:24.028 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:24.028 12:56:06 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:24.028 12:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:24.028 12:56:06 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:24.028 12:56:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:24.028 12:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:24.028 
12:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:24.028 12:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:32:24.028 12:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:24.028 12:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:24.028 12:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:24.028 12:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:32:24.028 12:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:24.028 12:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:24.028 12:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:24.028 12:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:24.028 12:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:32:24.028 12:56:06 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:32:24.028 12:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:32:24.028 12:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:32:24.028 12:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:24.028 12:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:24.028 12:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:28.206 12:56:10 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:32:28.206 12:56:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:28.206 12:56:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:28.206 12:56:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:32.388 12:56:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:32.388 12:56:14 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:32.388 12:56:14 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:32.388 12:56:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:32.388 12:56:14 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:32.388 12:56:14 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:32.388 12:56:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:32.388 12:56:14 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2769629 00:32:32.388 12:56:14 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:32.388 12:56:14 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2769629 00:32:32.388 12:56:14 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2769629 ']' 00:32:32.388 12:56:14 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.388 12:56:14 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:32.388 12:56:14 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:32.388 12:56:14 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:32.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:32.388 12:56:14 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:32.388 12:56:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:32.388 [2024-11-28 12:56:14.811399] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:32:32.388 [2024-11-28 12:56:14.811445] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:32.388 [2024-11-28 12:56:14.877045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:32.646 [2024-11-28 12:56:14.921104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:32.646 [2024-11-28 12:56:14.921139] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:32.646 [2024-11-28 12:56:14.921146] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:32.646 [2024-11-28 12:56:14.921152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:32.646 [2024-11-28 12:56:14.921157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:32.646 [2024-11-28 12:56:14.922564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.646 [2024-11-28 12:56:14.922666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:32.646 [2024-11-28 12:56:14.922762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:32.646 [2024-11-28 12:56:14.922763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:32.646 12:56:14 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:32.646 12:56:14 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:32:32.646 12:56:14 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:32.646 12:56:14 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.646 12:56:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:32.646 INFO: Log level set to 20 00:32:32.646 INFO: Requests: 00:32:32.646 { 00:32:32.646 "jsonrpc": "2.0", 00:32:32.646 "method": "nvmf_set_config", 00:32:32.646 "id": 1, 00:32:32.646 "params": { 00:32:32.646 "admin_cmd_passthru": { 00:32:32.646 "identify_ctrlr": true 00:32:32.646 } 00:32:32.646 } 00:32:32.646 } 00:32:32.646 00:32:32.646 INFO: response: 00:32:32.646 { 00:32:32.646 "jsonrpc": "2.0", 00:32:32.646 "id": 1, 00:32:32.646 "result": true 00:32:32.646 } 00:32:32.646 00:32:32.646 12:56:14 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.646 12:56:14 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:32.646 12:56:14 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.646 12:56:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:32.646 INFO: Setting log level to 20 00:32:32.646 INFO: Setting log level to 20 00:32:32.646 INFO: Log level set to 20 00:32:32.646 INFO: Log level set to 20 00:32:32.646 
INFO: Requests: 00:32:32.646 { 00:32:32.646 "jsonrpc": "2.0", 00:32:32.646 "method": "framework_start_init", 00:32:32.646 "id": 1 00:32:32.646 } 00:32:32.646 00:32:32.646 INFO: Requests: 00:32:32.646 { 00:32:32.646 "jsonrpc": "2.0", 00:32:32.646 "method": "framework_start_init", 00:32:32.646 "id": 1 00:32:32.646 } 00:32:32.646 00:32:32.646 [2024-11-28 12:56:15.039384] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:32.646 INFO: response: 00:32:32.646 { 00:32:32.646 "jsonrpc": "2.0", 00:32:32.646 "id": 1, 00:32:32.646 "result": true 00:32:32.646 } 00:32:32.646 00:32:32.646 INFO: response: 00:32:32.646 { 00:32:32.646 "jsonrpc": "2.0", 00:32:32.646 "id": 1, 00:32:32.646 "result": true 00:32:32.646 } 00:32:32.646 00:32:32.646 12:56:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.646 12:56:15 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:32.646 12:56:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.646 12:56:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:32.646 INFO: Setting log level to 40 00:32:32.646 INFO: Setting log level to 40 00:32:32.646 INFO: Setting log level to 40 00:32:32.646 [2024-11-28 12:56:15.052722] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:32.647 12:56:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.647 12:56:15 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:32.647 12:56:15 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:32.647 12:56:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:32.647 12:56:15 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:32:32.647 12:56:15 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.647 12:56:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:35.925 Nvme0n1 00:32:35.925 12:56:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.925 12:56:17 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:35.926 12:56:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.926 12:56:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:35.926 12:56:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.926 12:56:17 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:35.926 12:56:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.926 12:56:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:35.926 12:56:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.926 12:56:17 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:35.926 12:56:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.926 12:56:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:35.926 [2024-11-28 12:56:17.962136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:35.926 12:56:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.926 12:56:17 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:35.926 12:56:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.926 12:56:17 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:35.926 [ 00:32:35.926 { 00:32:35.926 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:35.926 "subtype": "Discovery", 00:32:35.926 "listen_addresses": [], 00:32:35.926 "allow_any_host": true, 00:32:35.926 "hosts": [] 00:32:35.926 }, 00:32:35.926 { 00:32:35.926 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:35.926 "subtype": "NVMe", 00:32:35.926 "listen_addresses": [ 00:32:35.926 { 00:32:35.926 "trtype": "TCP", 00:32:35.926 "adrfam": "IPv4", 00:32:35.926 "traddr": "10.0.0.2", 00:32:35.926 "trsvcid": "4420" 00:32:35.926 } 00:32:35.926 ], 00:32:35.926 "allow_any_host": true, 00:32:35.926 "hosts": [], 00:32:35.926 "serial_number": "SPDK00000000000001", 00:32:35.926 "model_number": "SPDK bdev Controller", 00:32:35.926 "max_namespaces": 1, 00:32:35.926 "min_cntlid": 1, 00:32:35.926 "max_cntlid": 65519, 00:32:35.926 "namespaces": [ 00:32:35.926 { 00:32:35.926 "nsid": 1, 00:32:35.926 "bdev_name": "Nvme0n1", 00:32:35.926 "name": "Nvme0n1", 00:32:35.926 "nguid": "0A47AFD40B814309AC9F7CB27C01D5E9", 00:32:35.926 "uuid": "0a47afd4-0b81-4309-ac9f-7cb27c01d5e9" 00:32:35.926 } 00:32:35.926 ] 00:32:35.926 } 00:32:35.926 ] 00:32:35.926 12:56:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.926 12:56:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:35.926 12:56:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:35.926 12:56:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:35.926 12:56:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:32:35.926 12:56:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:35.926 12:56:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:35.926 12:56:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:35.926 12:56:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:35.926 12:56:18 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:32:35.926 12:56:18 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:35.926 12:56:18 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:35.926 12:56:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.926 12:56:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:35.926 12:56:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.926 12:56:18 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:35.926 12:56:18 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:35.926 12:56:18 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:35.926 12:56:18 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:32:35.926 12:56:18 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:35.926 12:56:18 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:32:35.926 12:56:18 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:35.926 12:56:18 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:35.926 rmmod nvme_tcp 00:32:35.926 rmmod nvme_fabrics 00:32:35.926 rmmod nvme_keyring 00:32:35.926 12:56:18 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:35.926 12:56:18 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:32:35.926 12:56:18 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:32:35.926 12:56:18 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2769629 ']' 00:32:35.926 12:56:18 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2769629 00:32:35.926 12:56:18 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2769629 ']' 00:32:35.926 12:56:18 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2769629 00:32:35.926 12:56:18 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:32:35.926 12:56:18 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:35.926 12:56:18 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2769629 00:32:35.926 12:56:18 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:35.926 12:56:18 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:35.926 12:56:18 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2769629' 00:32:35.926 killing process with pid 2769629 00:32:35.926 12:56:18 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2769629 00:32:35.926 12:56:18 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2769629 00:32:37.827 12:56:19 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:37.827 12:56:19 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:37.827 12:56:19 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:37.827 12:56:19 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:32:37.827 12:56:19 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:32:37.827 12:56:19 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:37.827 12:56:19 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:32:37.827 12:56:19 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:37.827 12:56:19 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:37.827 12:56:19 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.827 12:56:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:37.827 12:56:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.731 12:56:21 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:39.731 00:32:39.731 real 0m21.055s 00:32:39.731 user 0m26.330s 00:32:39.731 sys 0m5.597s 00:32:39.731 12:56:21 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:39.731 12:56:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:39.731 ************************************ 00:32:39.731 END TEST nvmf_identify_passthru 00:32:39.731 ************************************ 00:32:39.731 12:56:21 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:39.731 12:56:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:39.731 12:56:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:39.731 12:56:21 -- common/autotest_common.sh@10 -- # set +x 00:32:39.731 ************************************ 00:32:39.731 START TEST nvmf_dif 00:32:39.731 ************************************ 00:32:39.731 12:56:21 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:39.731 * Looking for test storage... 
00:32:39.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:39.731 12:56:22 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:39.731 12:56:22 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:32:39.731 12:56:22 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:39.731 12:56:22 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:39.731 12:56:22 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:32:39.731 12:56:22 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:39.731 12:56:22 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:39.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.731 --rc genhtml_branch_coverage=1 00:32:39.731 --rc genhtml_function_coverage=1 00:32:39.731 --rc genhtml_legend=1 00:32:39.731 --rc geninfo_all_blocks=1 00:32:39.731 --rc geninfo_unexecuted_blocks=1 00:32:39.731 00:32:39.731 ' 00:32:39.731 12:56:22 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:39.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.731 --rc genhtml_branch_coverage=1 00:32:39.731 --rc genhtml_function_coverage=1 00:32:39.731 --rc genhtml_legend=1 00:32:39.731 --rc geninfo_all_blocks=1 00:32:39.731 --rc geninfo_unexecuted_blocks=1 00:32:39.731 00:32:39.731 ' 00:32:39.731 12:56:22 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:32:39.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.731 --rc genhtml_branch_coverage=1 00:32:39.731 --rc genhtml_function_coverage=1 00:32:39.731 --rc genhtml_legend=1 00:32:39.731 --rc geninfo_all_blocks=1 00:32:39.731 --rc geninfo_unexecuted_blocks=1 00:32:39.731 00:32:39.731 ' 00:32:39.731 12:56:22 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:39.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.731 --rc genhtml_branch_coverage=1 00:32:39.731 --rc genhtml_function_coverage=1 00:32:39.731 --rc genhtml_legend=1 00:32:39.731 --rc geninfo_all_blocks=1 00:32:39.731 --rc geninfo_unexecuted_blocks=1 00:32:39.731 00:32:39.731 ' 00:32:39.731 12:56:22 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.731 12:56:22 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:32:39.731 12:56:22 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.731 12:56:22 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.731 12:56:22 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.731 12:56:22 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.731 12:56:22 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.731 12:56:22 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.731 12:56:22 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.731 12:56:22 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.731 12:56:22 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.731 12:56:22 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.731 12:56:22 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:39.731 12:56:22 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:39.731 12:56:22 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.731 12:56:22 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.731 12:56:22 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.731 12:56:22 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.731 12:56:22 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.732 12:56:22 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:32:39.732 12:56:22 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.732 12:56:22 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.732 12:56:22 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.732 12:56:22 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.732 12:56:22 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.732 12:56:22 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.732 12:56:22 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:32:39.732 12:56:22 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.732 12:56:22 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:32:39.732 12:56:22 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:39.732 12:56:22 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:39.732 12:56:22 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.732 12:56:22 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.732 12:56:22 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.732 12:56:22 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:39.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:39.732 12:56:22 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:39.732 12:56:22 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:39.732 12:56:22 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:39.732 12:56:22 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:32:39.732 12:56:22 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:32:39.732 12:56:22 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:39.732 12:56:22 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:32:39.732 12:56:22 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:32:39.732 12:56:22 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:39.732 12:56:22 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.732 12:56:22 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:39.732 12:56:22 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:39.732 12:56:22 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:39.732 12:56:22 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.732 12:56:22 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:39.732 12:56:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.732 12:56:22 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:39.732 12:56:22 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:39.732 12:56:22 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:32:39.732 12:56:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:32:45.243 12:56:26 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:45.243 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:45.243 12:56:26 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:45.243 12:56:27 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:45.243 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:45.243 12:56:27 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:45.243 12:56:27 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:45.243 12:56:27 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.243 12:56:27 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.243 12:56:27 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:45.243 12:56:27 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:45.243 12:56:27 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:45.243 12:56:27 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:45.243 12:56:27 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:45.243 12:56:27 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.243 12:56:27 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:45.243 12:56:27 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:45.244 12:56:27 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:45.244 Found net devices under 0000:86:00.0: cvl_0_0 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:45.244 Found net devices under 0000:86:00.1: cvl_0_1 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:45.244 
12:56:27 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:45.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:45.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.414 ms 00:32:45.244 00:32:45.244 --- 10.0.0.2 ping statistics --- 00:32:45.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.244 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:45.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:45.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:32:45.244 00:32:45.244 --- 10.0.0.1 ping statistics --- 00:32:45.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.244 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:32:45.244 12:56:27 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:47.147 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:32:47.147 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:47.406 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:32:47.406 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:32:47.406 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:32:47.406 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:32:47.406 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:32:47.406 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:32:47.406 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:32:47.406 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:32:47.406 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:32:47.406 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:32:47.406 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:32:47.406 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:32:47.406 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:32:47.406 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:32:47.406 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:32:47.406 12:56:29 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:47.406 12:56:29 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:47.406 12:56:29 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:47.406 12:56:29 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:47.406 12:56:29 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:47.406 12:56:29 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:47.406 12:56:29 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:47.406 12:56:29 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:32:47.406 12:56:29 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:47.406 12:56:29 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:47.406 12:56:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:47.406 12:56:29 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2775031 00:32:47.406 12:56:29 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2775031 00:32:47.406 12:56:29 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:47.406 12:56:29 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2775031 ']' 00:32:47.406 12:56:29 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:47.406 12:56:29 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:47.406 12:56:29 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:47.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:47.406 12:56:29 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:47.406 12:56:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:47.664 [2024-11-28 12:56:29.937755] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:32:47.664 [2024-11-28 12:56:29.937802] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:47.664 [2024-11-28 12:56:30.002393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.664 [2024-11-28 12:56:30.049487] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:47.664 [2024-11-28 12:56:30.049521] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:47.664 [2024-11-28 12:56:30.049529] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:47.664 [2024-11-28 12:56:30.049535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:47.664 [2024-11-28 12:56:30.049541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:47.664 [2024-11-28 12:56:30.050103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.664 12:56:30 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:47.664 12:56:30 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:32:47.664 12:56:30 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:47.664 12:56:30 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:47.665 12:56:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:47.924 12:56:30 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:47.924 12:56:30 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:32:47.924 12:56:30 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:47.924 12:56:30 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.924 12:56:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:47.924 [2024-11-28 12:56:30.191659] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:47.924 12:56:30 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.924 12:56:30 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:47.924 12:56:30 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:47.924 12:56:30 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:47.924 12:56:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:47.924 ************************************ 00:32:47.924 START TEST fio_dif_1_default 00:32:47.924 ************************************ 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:47.924 bdev_null0 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:47.924 [2024-11-28 12:56:30.263999] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:47.924 { 00:32:47.924 "params": { 00:32:47.924 "name": "Nvme$subsystem", 00:32:47.924 "trtype": "$TEST_TRANSPORT", 00:32:47.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:47.924 "adrfam": "ipv4", 00:32:47.924 "trsvcid": "$NVMF_PORT", 00:32:47.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:47.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:47.924 "hdgst": ${hdgst:-false}, 00:32:47.924 "ddgst": ${ddgst:-false} 00:32:47.924 }, 00:32:47.924 "method": "bdev_nvme_attach_controller" 00:32:47.924 } 00:32:47.924 EOF 00:32:47.924 )") 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:32:47.924 12:56:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:47.924 "params": { 00:32:47.924 "name": "Nvme0", 00:32:47.924 "trtype": "tcp", 00:32:47.924 "traddr": "10.0.0.2", 00:32:47.924 "adrfam": "ipv4", 00:32:47.924 "trsvcid": "4420", 00:32:47.924 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:47.924 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:47.924 "hdgst": false, 00:32:47.924 "ddgst": false 00:32:47.924 }, 00:32:47.924 "method": "bdev_nvme_attach_controller" 00:32:47.925 }' 00:32:47.925 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:47.925 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:47.925 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:47.925 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:47.925 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:47.925 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:47.925 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:47.925 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:47.925 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:47.925 12:56:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:48.182 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:48.182 fio-3.35 
00:32:48.182 Starting 1 thread 00:33:00.382 00:33:00.382 filename0: (groupid=0, jobs=1): err= 0: pid=2775298: Thu Nov 28 12:56:41 2024 00:33:00.382 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:33:00.382 slat (nsec): min=4285, max=13291, avg=6242.73, stdev=380.80 00:33:00.382 clat (usec): min=40804, max=45294, avg=41008.95, stdev=293.01 00:33:00.382 lat (usec): min=40810, max=45307, avg=41015.19, stdev=293.04 00:33:00.382 clat percentiles (usec): 00:33:00.382 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:00.382 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:00.382 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:00.382 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:33:00.382 | 99.99th=[45351] 00:33:00.382 bw ( KiB/s): min= 384, max= 416, per=99.49%, avg=388.80, stdev=11.72, samples=20 00:33:00.382 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:33:00.382 lat (msec) : 50=100.00% 00:33:00.382 cpu : usr=92.90%, sys=6.86%, ctx=13, majf=0, minf=0 00:33:00.382 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:00.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.382 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:00.382 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:00.382 00:33:00.382 Run status group 0 (all jobs): 00:33:00.382 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10011-10011msec 00:33:00.382 12:56:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:00.382 12:56:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:00.382 12:56:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:33:00.382 12:56:41 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.383 00:33:00.383 real 0m11.202s 00:33:00.383 user 0m16.258s 00:33:00.383 sys 0m0.978s 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:00.383 ************************************ 00:33:00.383 END TEST fio_dif_1_default 00:33:00.383 ************************************ 00:33:00.383 12:56:41 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:00.383 12:56:41 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:00.383 12:56:41 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:00.383 12:56:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:00.383 ************************************ 00:33:00.383 START TEST fio_dif_1_multi_subsystems 00:33:00.383 ************************************ 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:00.383 bdev_null0 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:00.383 [2024-11-28 12:56:41.541258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:00.383 bdev_null1 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:00.383 12:56:41 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:00.383 { 00:33:00.383 "params": { 00:33:00.383 "name": "Nvme$subsystem", 00:33:00.383 "trtype": "$TEST_TRANSPORT", 00:33:00.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:00.383 "adrfam": "ipv4", 00:33:00.383 "trsvcid": "$NVMF_PORT", 00:33:00.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:00.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:00.383 "hdgst": ${hdgst:-false}, 00:33:00.383 "ddgst": ${ddgst:-false} 00:33:00.383 }, 00:33:00.383 "method": "bdev_nvme_attach_controller" 00:33:00.383 } 00:33:00.383 EOF 00:33:00.383 )") 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:00.383 12:56:41 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:00.383 { 00:33:00.383 "params": { 00:33:00.383 "name": "Nvme$subsystem", 00:33:00.383 "trtype": "$TEST_TRANSPORT", 00:33:00.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:00.383 "adrfam": "ipv4", 00:33:00.383 "trsvcid": "$NVMF_PORT", 00:33:00.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:00.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:00.383 "hdgst": ${hdgst:-false}, 00:33:00.383 "ddgst": ${ddgst:-false} 00:33:00.383 }, 00:33:00.383 "method": "bdev_nvme_attach_controller" 00:33:00.383 } 00:33:00.383 EOF 00:33:00.383 )") 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:00.383 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:00.383 "params": { 00:33:00.383 "name": "Nvme0", 00:33:00.383 "trtype": "tcp", 00:33:00.383 "traddr": "10.0.0.2", 00:33:00.383 "adrfam": "ipv4", 00:33:00.384 "trsvcid": "4420", 00:33:00.384 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:00.384 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:00.384 "hdgst": false, 00:33:00.384 "ddgst": false 00:33:00.384 }, 00:33:00.384 "method": "bdev_nvme_attach_controller" 00:33:00.384 },{ 00:33:00.384 "params": { 00:33:00.384 "name": "Nvme1", 00:33:00.384 "trtype": "tcp", 00:33:00.384 "traddr": "10.0.0.2", 00:33:00.384 "adrfam": "ipv4", 00:33:00.384 "trsvcid": "4420", 00:33:00.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:00.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:00.384 "hdgst": false, 00:33:00.384 "ddgst": false 00:33:00.384 }, 00:33:00.384 "method": "bdev_nvme_attach_controller" 00:33:00.384 }' 00:33:00.384 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:00.384 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:00.384 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:00.384 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:00.384 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:00.384 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:00.384 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:00.384 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:00.384 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:00.384 12:56:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:00.384 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:00.384 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:00.384 fio-3.35 00:33:00.384 Starting 2 threads 00:33:10.361 00:33:10.361 filename0: (groupid=0, jobs=1): err= 0: pid=2777223: Thu Nov 28 12:56:52 2024 00:33:10.361 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10027msec) 00:33:10.361 slat (nsec): min=6297, max=42400, avg=8099.40, stdev=3079.37 00:33:10.361 clat (usec): min=40829, max=42143, avg=41581.03, stdev=490.88 00:33:10.361 lat (usec): min=40836, max=42157, avg=41589.13, stdev=490.95 00:33:10.361 clat percentiles (usec): 00:33:10.361 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:10.361 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:33:10.361 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:10.361 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:10.361 | 99.99th=[42206] 00:33:10.361 bw ( KiB/s): min= 384, max= 384, per=50.14%, avg=384.00, stdev= 0.00, samples=20 00:33:10.361 iops : min= 96, max= 96, avg=96.00, stdev= 0.00, samples=20 00:33:10.361 lat (msec) : 50=100.00% 00:33:10.361 cpu : usr=96.61%, sys=3.12%, ctx=13, majf=0, minf=9 00:33:10.361 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:10.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.361 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.361 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:10.361 filename1: (groupid=0, jobs=1): err= 0: pid=2777224: Thu Nov 28 12:56:52 2024 00:33:10.361 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10020msec) 00:33:10.361 slat (nsec): min=6277, max=43005, avg=8070.31, stdev=3072.01 00:33:10.361 clat (usec): min=40955, max=42107, avg=41899.84, stdev=262.69 00:33:10.361 lat (usec): min=40962, max=42119, avg=41907.91, stdev=262.75 00:33:10.361 clat percentiles (usec): 00:33:10.361 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[42206], 00:33:10.361 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:33:10.361 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:10.361 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:10.361 | 99.99th=[42206] 00:33:10.361 bw ( KiB/s): min= 352, max= 384, per=49.61%, avg=380.80, stdev= 9.85, samples=20 00:33:10.361 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:33:10.361 lat (msec) : 50=100.00% 00:33:10.361 cpu : usr=96.93%, sys=2.81%, ctx=11, majf=0, minf=9 00:33:10.361 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:10.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.361 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.361 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:10.361 00:33:10.361 Run status group 0 (all jobs): 00:33:10.361 READ: bw=766KiB/s (784kB/s), 382KiB/s-385KiB/s (391kB/s-394kB/s), io=7680KiB (7864kB), run=10020-10027msec 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # 
local sub 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.361 12:56:52 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.361 00:33:10.361 real 0m11.309s 00:33:10.361 user 0m26.518s 00:33:10.361 sys 0m0.908s 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:10.361 12:56:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:10.361 ************************************ 00:33:10.361 END TEST fio_dif_1_multi_subsystems 00:33:10.361 ************************************ 00:33:10.361 12:56:52 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:10.361 12:56:52 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:10.361 12:56:52 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:10.361 12:56:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:10.621 ************************************ 00:33:10.621 START TEST fio_dif_rand_params 00:33:10.621 ************************************ 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:10.621 12:56:52 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.621 bdev_null0 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.621 [2024-11-28 12:56:52.928828] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:10.621 { 00:33:10.621 "params": { 00:33:10.621 "name": "Nvme$subsystem", 00:33:10.621 "trtype": "$TEST_TRANSPORT", 00:33:10.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:10.621 "adrfam": "ipv4", 00:33:10.621 "trsvcid": "$NVMF_PORT", 
00:33:10.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:10.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:10.621 "hdgst": ${hdgst:-false}, 00:33:10.621 "ddgst": ${ddgst:-false} 00:33:10.621 }, 00:33:10.621 "method": "bdev_nvme_attach_controller" 00:33:10.621 } 00:33:10.621 EOF 00:33:10.621 )") 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:10.621 
12:56:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:10.621 "params": { 00:33:10.621 "name": "Nvme0", 00:33:10.621 "trtype": "tcp", 00:33:10.621 "traddr": "10.0.0.2", 00:33:10.621 "adrfam": "ipv4", 00:33:10.621 "trsvcid": "4420", 00:33:10.621 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:10.621 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:10.621 "hdgst": false, 00:33:10.621 "ddgst": false 00:33:10.621 }, 00:33:10.621 "method": "bdev_nvme_attach_controller" 00:33:10.621 }' 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:10.621 12:56:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:10.880 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, 
(W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:10.880 ... 00:33:10.880 fio-3.35 00:33:10.880 Starting 3 threads 00:33:17.452 00:33:17.452 filename0: (groupid=0, jobs=1): err= 0: pid=2779183: Thu Nov 28 12:56:58 2024 00:33:17.452 read: IOPS=312, BW=39.1MiB/s (40.9MB/s)(196MiB/5006msec) 00:33:17.452 slat (nsec): min=6337, max=32946, avg=10989.32, stdev=2303.30 00:33:17.452 clat (usec): min=3793, max=51806, avg=9587.10, stdev=6420.84 00:33:17.452 lat (usec): min=3800, max=51819, avg=9598.09, stdev=6420.92 00:33:17.452 clat percentiles (usec): 00:33:17.452 | 1.00th=[ 4228], 5.00th=[ 5735], 10.00th=[ 6456], 20.00th=[ 7111], 00:33:17.452 | 30.00th=[ 7963], 40.00th=[ 8356], 50.00th=[ 8848], 60.00th=[ 9241], 00:33:17.452 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10683], 95.00th=[11469], 00:33:17.452 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51119], 99.95th=[51643], 00:33:17.452 | 99.99th=[51643] 00:33:17.452 bw ( KiB/s): min=35584, max=42752, per=35.62%, avg=39961.60, stdev=2119.46, samples=10 00:33:17.452 iops : min= 278, max= 334, avg=312.20, stdev=16.56, samples=10 00:33:17.452 lat (msec) : 4=0.38%, 10=78.07%, 20=19.05%, 50=2.11%, 100=0.38% 00:33:17.452 cpu : usr=93.25%, sys=6.43%, ctx=18, majf=0, minf=61 00:33:17.452 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:17.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.452 issued rwts: total=1564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:17.452 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:17.452 filename0: (groupid=0, jobs=1): err= 0: pid=2779184: Thu Nov 28 12:56:58 2024 00:33:17.452 read: IOPS=289, BW=36.2MiB/s (38.0MB/s)(183MiB/5045msec) 00:33:17.452 slat (nsec): min=6377, max=29723, avg=11259.21, stdev=2231.15 00:33:17.452 clat (usec): min=3668, max=51559, avg=10308.92, stdev=6487.75 00:33:17.452 lat (usec): 
min=3674, max=51571, avg=10320.18, stdev=6487.74 00:33:17.452 clat percentiles (usec): 00:33:17.452 | 1.00th=[ 4113], 5.00th=[ 5932], 10.00th=[ 6521], 20.00th=[ 7308], 00:33:17.452 | 30.00th=[ 8356], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[10159], 00:33:17.452 | 70.00th=[10683], 80.00th=[11338], 90.00th=[12125], 95.00th=[12911], 00:33:17.452 | 99.00th=[48497], 99.50th=[49546], 99.90th=[51119], 99.95th=[51643], 00:33:17.452 | 99.99th=[51643] 00:33:17.452 bw ( KiB/s): min=32256, max=39680, per=33.29%, avg=37350.40, stdev=2456.80, samples=10 00:33:17.452 iops : min= 252, max= 310, avg=291.80, stdev=19.19, samples=10 00:33:17.452 lat (msec) : 4=0.48%, 10=58.34%, 20=38.58%, 50=2.19%, 100=0.41% 00:33:17.452 cpu : usr=93.58%, sys=6.11%, ctx=14, majf=0, minf=20 00:33:17.452 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:17.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.452 issued rwts: total=1462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:17.452 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:17.452 filename0: (groupid=0, jobs=1): err= 0: pid=2779185: Thu Nov 28 12:56:58 2024 00:33:17.452 read: IOPS=278, BW=34.9MiB/s (36.6MB/s)(175MiB/5004msec) 00:33:17.453 slat (nsec): min=6429, max=27786, avg=11069.67, stdev=2368.96 00:33:17.453 clat (usec): min=3450, max=91451, avg=10738.48, stdev=8315.13 00:33:17.453 lat (usec): min=3462, max=91464, avg=10749.55, stdev=8315.20 00:33:17.453 clat percentiles (usec): 00:33:17.453 | 1.00th=[ 4015], 5.00th=[ 5997], 10.00th=[ 6718], 20.00th=[ 7898], 00:33:17.453 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896], 00:33:17.453 | 70.00th=[10421], 80.00th=[10945], 90.00th=[11994], 95.00th=[12780], 00:33:17.453 | 99.00th=[50594], 99.50th=[51119], 99.90th=[91751], 99.95th=[91751], 00:33:17.453 | 99.99th=[91751] 00:33:17.453 bw ( KiB/s): min=28672, 
max=39424, per=31.78%, avg=35660.80, stdev=3208.92, samples=10 00:33:17.453 iops : min= 224, max= 308, avg=278.60, stdev=25.07, samples=10 00:33:17.453 lat (msec) : 4=0.86%, 10=61.68%, 20=34.10%, 50=2.15%, 100=1.22% 00:33:17.453 cpu : usr=94.10%, sys=5.58%, ctx=8, majf=0, minf=57 00:33:17.453 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:17.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.453 issued rwts: total=1396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:17.453 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:17.453 00:33:17.453 Run status group 0 (all jobs): 00:33:17.453 READ: bw=110MiB/s (115MB/s), 34.9MiB/s-39.1MiB/s (36.6MB/s-40.9MB/s), io=553MiB (580MB), run=5004-5045msec 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.453 12:56:59 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:17.453 bdev_null0 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set 
+x 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:17.453 [2024-11-28 12:56:59.154442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:17.453 bdev_null1 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:17.453 bdev_null2 00:33:17.453 12:56:59 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:17.453 { 00:33:17.453 "params": { 00:33:17.453 "name": "Nvme$subsystem", 00:33:17.453 "trtype": "$TEST_TRANSPORT", 00:33:17.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:17.453 "adrfam": "ipv4", 00:33:17.453 "trsvcid": "$NVMF_PORT", 00:33:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:17.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:17.453 "hdgst": ${hdgst:-false}, 00:33:17.453 "ddgst": ${ddgst:-false} 00:33:17.453 }, 00:33:17.453 "method": "bdev_nvme_attach_controller" 00:33:17.453 } 00:33:17.453 EOF 00:33:17.453 )") 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:17.453 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:17.454 
12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:17.454 { 00:33:17.454 "params": { 00:33:17.454 "name": "Nvme$subsystem", 00:33:17.454 "trtype": "$TEST_TRANSPORT", 00:33:17.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:17.454 "adrfam": "ipv4", 00:33:17.454 "trsvcid": "$NVMF_PORT", 00:33:17.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:17.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:17.454 "hdgst": ${hdgst:-false}, 00:33:17.454 "ddgst": ${ddgst:-false} 00:33:17.454 }, 00:33:17.454 "method": "bdev_nvme_attach_controller" 00:33:17.454 } 00:33:17.454 EOF 00:33:17.454 )") 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:17.454 12:56:59 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:17.454 { 00:33:17.454 "params": { 00:33:17.454 "name": "Nvme$subsystem", 00:33:17.454 "trtype": "$TEST_TRANSPORT", 00:33:17.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:17.454 "adrfam": "ipv4", 00:33:17.454 "trsvcid": "$NVMF_PORT", 00:33:17.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:17.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:17.454 "hdgst": ${hdgst:-false}, 00:33:17.454 "ddgst": ${ddgst:-false} 00:33:17.454 }, 00:33:17.454 "method": "bdev_nvme_attach_controller" 00:33:17.454 } 00:33:17.454 EOF 00:33:17.454 )") 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:17.454 "params": { 00:33:17.454 "name": "Nvme0", 00:33:17.454 "trtype": "tcp", 00:33:17.454 "traddr": "10.0.0.2", 00:33:17.454 "adrfam": "ipv4", 00:33:17.454 "trsvcid": "4420", 00:33:17.454 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:17.454 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:17.454 "hdgst": false, 00:33:17.454 "ddgst": false 00:33:17.454 }, 00:33:17.454 "method": "bdev_nvme_attach_controller" 00:33:17.454 },{ 00:33:17.454 "params": { 00:33:17.454 "name": "Nvme1", 00:33:17.454 "trtype": "tcp", 00:33:17.454 "traddr": "10.0.0.2", 00:33:17.454 "adrfam": "ipv4", 00:33:17.454 "trsvcid": "4420", 00:33:17.454 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:17.454 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:17.454 "hdgst": false, 00:33:17.454 "ddgst": false 00:33:17.454 }, 00:33:17.454 "method": "bdev_nvme_attach_controller" 00:33:17.454 },{ 00:33:17.454 "params": { 00:33:17.454 "name": "Nvme2", 00:33:17.454 "trtype": "tcp", 00:33:17.454 "traddr": "10.0.0.2", 00:33:17.454 "adrfam": "ipv4", 00:33:17.454 "trsvcid": "4420", 00:33:17.454 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:17.454 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:17.454 "hdgst": false, 00:33:17.454 "ddgst": false 00:33:17.454 }, 00:33:17.454 "method": "bdev_nvme_attach_controller" 00:33:17.454 }' 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:17.454 12:56:59 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:17.454 12:56:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:17.454 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:17.454 ... 00:33:17.454 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:17.454 ... 00:33:17.454 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:17.454 ... 
00:33:17.454 fio-3.35 00:33:17.454 Starting 24 threads 00:33:29.667 00:33:29.668 filename0: (groupid=0, jobs=1): err= 0: pid=2780489: Thu Nov 28 12:57:10 2024 00:33:29.668 read: IOPS=559, BW=2237KiB/s (2290kB/s)(21.9MiB/10011msec) 00:33:29.668 slat (nsec): min=7383, max=96781, avg=15483.30, stdev=10162.11 00:33:29.668 clat (usec): min=12706, max=30054, avg=28492.79, stdev=1290.56 00:33:29.668 lat (usec): min=12721, max=30076, avg=28508.27, stdev=1288.26 00:33:29.668 clat percentiles (usec): 00:33:29.668 | 1.00th=[19268], 5.00th=[28181], 10.00th=[28443], 20.00th=[28443], 00:33:29.668 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28705], 60.00th=[28705], 00:33:29.668 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[29230], 00:33:29.668 | 99.00th=[29754], 99.50th=[29754], 99.90th=[30016], 99.95th=[30016], 00:33:29.668 | 99.99th=[30016] 00:33:29.668 bw ( KiB/s): min= 2176, max= 2416, per=4.17%, avg=2232.80, stdev=74.05, samples=20 00:33:29.668 iops : min= 544, max= 604, avg=558.20, stdev=18.51, samples=20 00:33:29.668 lat (msec) : 20=1.07%, 50=98.93% 00:33:29.668 cpu : usr=98.55%, sys=1.06%, ctx=13, majf=0, minf=78 00:33:29.668 IO depths : 1=2.3%, 2=7.7%, 4=24.0%, 8=55.8%, 16=10.1%, 32=0.0%, >=64=0.0% 00:33:29.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 issued rwts: total=5598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.668 filename0: (groupid=0, jobs=1): err= 0: pid=2780490: Thu Nov 28 12:57:10 2024 00:33:29.668 read: IOPS=557, BW=2230KiB/s (2284kB/s)(21.8MiB/10014msec) 00:33:29.668 slat (nsec): min=7384, max=93586, avg=28048.97, stdev=14496.94 00:33:29.668 clat (usec): min=17748, max=34059, avg=28473.36, stdev=864.71 00:33:29.668 lat (usec): min=17761, max=34082, avg=28501.41, stdev=864.57 00:33:29.668 clat percentiles (usec): 00:33:29.668 | 
1.00th=[27657], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443], 00:33:29.668 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:33:29.668 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:33:29.668 | 99.00th=[29230], 99.50th=[29492], 99.90th=[33817], 99.95th=[33817], 00:33:29.668 | 99.99th=[33817] 00:33:29.668 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2229.89, stdev=64.93, samples=19 00:33:29.668 iops : min= 544, max= 576, avg=557.47, stdev=16.23, samples=19 00:33:29.668 lat (msec) : 20=0.57%, 50=99.43% 00:33:29.668 cpu : usr=98.62%, sys=1.00%, ctx=13, majf=0, minf=32 00:33:29.668 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:29.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 issued rwts: total=5584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.668 filename0: (groupid=0, jobs=1): err= 0: pid=2780491: Thu Nov 28 12:57:10 2024 00:33:29.668 read: IOPS=556, BW=2226KiB/s (2280kB/s)(21.8MiB/10004msec) 00:33:29.668 slat (nsec): min=5462, max=94916, avg=31932.45, stdev=16891.07 00:33:29.668 clat (usec): min=9911, max=56587, avg=28500.26, stdev=1954.40 00:33:29.668 lat (usec): min=9926, max=56604, avg=28532.20, stdev=1953.46 00:33:29.668 clat percentiles (usec): 00:33:29.668 | 1.00th=[25560], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:33:29.668 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:33:29.668 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:33:29.668 | 99.00th=[29492], 99.50th=[30278], 99.90th=[56361], 99.95th=[56361], 00:33:29.668 | 99.99th=[56361] 00:33:29.668 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2216.42, stdev=73.20, samples=19 00:33:29.668 iops : min= 512, max= 576, avg=554.11, stdev=18.30, samples=19 
00:33:29.668 lat (msec) : 10=0.14%, 20=0.43%, 50=99.14%, 100=0.29% 00:33:29.668 cpu : usr=98.36%, sys=1.22%, ctx=11, majf=0, minf=40 00:33:29.668 IO depths : 1=5.7%, 2=11.9%, 4=24.8%, 8=50.7%, 16=6.8%, 32=0.0%, >=64=0.0% 00:33:29.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.668 filename0: (groupid=0, jobs=1): err= 0: pid=2780492: Thu Nov 28 12:57:10 2024 00:33:29.668 read: IOPS=555, BW=2224KiB/s (2277kB/s)(21.8MiB/10016msec) 00:33:29.668 slat (nsec): min=4966, max=83422, avg=29901.65, stdev=13134.66 00:33:29.668 clat (usec): min=16169, max=45508, avg=28512.99, stdev=1028.82 00:33:29.668 lat (usec): min=16176, max=45521, avg=28542.89, stdev=1028.20 00:33:29.668 clat percentiles (usec): 00:33:29.668 | 1.00th=[27919], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:33:29.668 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:33:29.668 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:33:29.668 | 99.00th=[29492], 99.50th=[29754], 99.90th=[40633], 99.95th=[45351], 00:33:29.668 | 99.99th=[45351] 00:33:29.668 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2222.32, stdev=70.33, samples=19 00:33:29.668 iops : min= 512, max= 576, avg=555.58, stdev=17.58, samples=19 00:33:29.668 lat (msec) : 20=0.29%, 50=99.71% 00:33:29.668 cpu : usr=97.90%, sys=1.47%, ctx=145, majf=0, minf=80 00:33:29.668 IO depths : 1=1.9%, 2=8.2%, 4=25.0%, 8=54.3%, 16=10.6%, 32=0.0%, >=64=0.0% 00:33:29.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.668 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:33:29.668 filename0: (groupid=0, jobs=1): err= 0: pid=2780493: Thu Nov 28 12:57:10 2024 00:33:29.668 read: IOPS=556, BW=2226KiB/s (2280kB/s)(21.8MiB/10004msec) 00:33:29.668 slat (nsec): min=6976, max=52272, avg=17019.37, stdev=6704.08 00:33:29.668 clat (usec): min=19806, max=38082, avg=28609.17, stdev=747.04 00:33:29.668 lat (usec): min=19815, max=38108, avg=28626.19, stdev=746.53 00:33:29.668 clat percentiles (usec): 00:33:29.668 | 1.00th=[27919], 5.00th=[28181], 10.00th=[28443], 20.00th=[28443], 00:33:29.668 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705], 00:33:29.668 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[29230], 00:33:29.668 | 99.00th=[29492], 99.50th=[30016], 99.90th=[38011], 99.95th=[38011], 00:33:29.668 | 99.99th=[38011] 00:33:29.668 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2223.16, stdev=63.44, samples=19 00:33:29.668 iops : min= 544, max= 576, avg=555.79, stdev=15.86, samples=19 00:33:29.668 lat (msec) : 20=0.29%, 50=99.71% 00:33:29.668 cpu : usr=98.26%, sys=1.35%, ctx=13, majf=0, minf=49 00:33:29.668 IO depths : 1=6.0%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:29.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.668 filename0: (groupid=0, jobs=1): err= 0: pid=2780494: Thu Nov 28 12:57:10 2024 00:33:29.668 read: IOPS=556, BW=2226KiB/s (2280kB/s)(21.8MiB/10005msec) 00:33:29.668 slat (nsec): min=5127, max=53361, avg=22284.25, stdev=7145.43 00:33:29.668 clat (usec): min=10630, max=58625, avg=28550.53, stdev=1966.72 00:33:29.668 lat (usec): min=10637, max=58638, avg=28572.82, stdev=1966.33 00:33:29.668 clat percentiles (usec): 00:33:29.668 | 1.00th=[27919], 5.00th=[28181], 10.00th=[28181], 
20.00th=[28443], 00:33:29.668 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:33:29.668 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:33:29.668 | 99.00th=[29492], 99.50th=[29754], 99.90th=[58459], 99.95th=[58459], 00:33:29.668 | 99.99th=[58459] 00:33:29.668 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2216.42, stdev=74.55, samples=19 00:33:29.668 iops : min= 512, max= 576, avg=554.11, stdev=18.64, samples=19 00:33:29.668 lat (msec) : 20=0.57%, 50=99.14%, 100=0.29% 00:33:29.668 cpu : usr=98.47%, sys=1.16%, ctx=13, majf=0, minf=42 00:33:29.668 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:29.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.668 filename0: (groupid=0, jobs=1): err= 0: pid=2780495: Thu Nov 28 12:57:10 2024 00:33:29.668 read: IOPS=559, BW=2236KiB/s (2290kB/s)(21.9MiB/10010msec) 00:33:29.668 slat (nsec): min=3395, max=96727, avg=28067.90, stdev=19476.32 00:33:29.668 clat (usec): min=12032, max=39919, avg=28351.23, stdev=1760.42 00:33:29.668 lat (usec): min=12041, max=39927, avg=28379.30, stdev=1760.91 00:33:29.668 clat percentiles (usec): 00:33:29.668 | 1.00th=[19792], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:33:29.668 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:33:29.668 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967], 00:33:29.668 | 99.00th=[33817], 99.50th=[36963], 99.90th=[40109], 99.95th=[40109], 00:33:29.668 | 99.99th=[40109] 00:33:29.668 bw ( KiB/s): min= 2176, max= 2368, per=4.18%, avg=2235.16, stdev=69.16, samples=19 00:33:29.668 iops : min= 544, max= 592, avg=558.79, stdev=17.29, samples=19 00:33:29.668 lat (msec) : 20=1.07%, 
50=98.93% 00:33:29.668 cpu : usr=98.53%, sys=1.09%, ctx=11, majf=0, minf=41 00:33:29.668 IO depths : 1=5.6%, 2=11.3%, 4=23.0%, 8=52.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:33:29.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 complete : 0=0.0%, 4=93.6%, 8=0.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 issued rwts: total=5596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.668 filename0: (groupid=0, jobs=1): err= 0: pid=2780496: Thu Nov 28 12:57:10 2024 00:33:29.668 read: IOPS=556, BW=2228KiB/s (2281kB/s)(21.8MiB/10011msec) 00:33:29.668 slat (nsec): min=5657, max=91702, avg=32680.59, stdev=17188.74 00:33:29.668 clat (usec): min=17410, max=45147, avg=28414.58, stdev=1209.87 00:33:29.668 lat (usec): min=17444, max=45164, avg=28447.26, stdev=1210.18 00:33:29.668 clat percentiles (usec): 00:33:29.668 | 1.00th=[23987], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:33:29.668 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:33:29.668 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967], 00:33:29.668 | 99.00th=[30016], 99.50th=[34341], 99.90th=[41681], 99.95th=[44827], 00:33:29.668 | 99.99th=[45351] 00:33:29.668 bw ( KiB/s): min= 2112, max= 2304, per=4.16%, avg=2226.53, stdev=69.39, samples=19 00:33:29.668 iops : min= 528, max= 576, avg=556.63, stdev=17.35, samples=19 00:33:29.668 lat (msec) : 20=0.36%, 50=99.64% 00:33:29.668 cpu : usr=98.31%, sys=1.32%, ctx=12, majf=0, minf=40 00:33:29.668 IO depths : 1=5.8%, 2=11.8%, 4=24.0%, 8=51.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:33:29.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 issued rwts: total=5576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.668 filename1: (groupid=0, 
jobs=1): err= 0: pid=2780497: Thu Nov 28 12:57:10 2024 00:33:29.668 read: IOPS=556, BW=2226KiB/s (2280kB/s)(21.8MiB/10005msec) 00:33:29.668 slat (nsec): min=4151, max=96859, avg=35008.11, stdev=16508.45 00:33:29.668 clat (usec): min=9923, max=62100, avg=28437.79, stdev=2002.23 00:33:29.668 lat (usec): min=9940, max=62113, avg=28472.80, stdev=2001.44 00:33:29.668 clat percentiles (usec): 00:33:29.668 | 1.00th=[27919], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:33:29.668 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:33:29.668 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967], 00:33:29.668 | 99.00th=[29230], 99.50th=[29492], 99.90th=[57934], 99.95th=[57934], 00:33:29.668 | 99.99th=[62129] 00:33:29.668 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2216.63, stdev=74.05, samples=19 00:33:29.668 iops : min= 513, max= 576, avg=554.16, stdev=18.51, samples=19 00:33:29.668 lat (msec) : 10=0.09%, 20=0.48%, 50=99.14%, 100=0.29% 00:33:29.668 cpu : usr=98.40%, sys=1.22%, ctx=13, majf=0, minf=33 00:33:29.668 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:29.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.668 filename1: (groupid=0, jobs=1): err= 0: pid=2780498: Thu Nov 28 12:57:10 2024 00:33:29.668 read: IOPS=556, BW=2226KiB/s (2280kB/s)(21.8MiB/10005msec) 00:33:29.668 slat (nsec): min=4290, max=50763, avg=21880.99, stdev=7137.75 00:33:29.668 clat (usec): min=10618, max=58579, avg=28561.51, stdev=1959.45 00:33:29.668 lat (usec): min=10626, max=58592, avg=28583.39, stdev=1959.04 00:33:29.668 clat percentiles (usec): 00:33:29.668 | 1.00th=[27395], 5.00th=[28181], 10.00th=[28443], 20.00th=[28443], 00:33:29.668 | 30.00th=[28443], 
40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:33:29.668 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:33:29.668 | 99.00th=[29492], 99.50th=[29754], 99.90th=[58459], 99.95th=[58459], 00:33:29.668 | 99.99th=[58459] 00:33:29.668 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2216.42, stdev=74.55, samples=19 00:33:29.668 iops : min= 512, max= 576, avg=554.11, stdev=18.64, samples=19 00:33:29.668 lat (msec) : 20=0.57%, 50=99.14%, 100=0.29% 00:33:29.668 cpu : usr=98.37%, sys=1.26%, ctx=15, majf=0, minf=51 00:33:29.668 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:29.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.668 filename1: (groupid=0, jobs=1): err= 0: pid=2780499: Thu Nov 28 12:57:10 2024 00:33:29.668 read: IOPS=557, BW=2231KiB/s (2284kB/s)(21.8MiB/10017msec) 00:33:29.668 slat (nsec): min=7069, max=50870, avg=17245.90, stdev=7036.56 00:33:29.668 clat (usec): min=18136, max=47174, avg=28559.91, stdev=2152.61 00:33:29.668 lat (usec): min=18144, max=47191, avg=28577.15, stdev=2152.44 00:33:29.668 clat percentiles (usec): 00:33:29.668 | 1.00th=[19006], 5.00th=[27132], 10.00th=[28181], 20.00th=[28443], 00:33:29.668 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705], 00:33:29.668 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[29754], 00:33:29.668 | 99.00th=[38536], 99.50th=[38536], 99.90th=[46924], 99.95th=[46924], 00:33:29.668 | 99.99th=[46924] 00:33:29.668 bw ( KiB/s): min= 2096, max= 2304, per=4.17%, avg=2230.74, stdev=70.62, samples=19 00:33:29.668 iops : min= 524, max= 576, avg=557.68, stdev=17.65, samples=19 00:33:29.668 lat (msec) : 20=1.90%, 50=98.10% 00:33:29.668 cpu : usr=98.42%, sys=1.20%, 
ctx=13, majf=0, minf=45 00:33:29.668 IO depths : 1=3.3%, 2=8.8%, 4=22.2%, 8=56.4%, 16=9.2%, 32=0.0%, >=64=0.0% 00:33:29.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 issued rwts: total=5586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.668 filename1: (groupid=0, jobs=1): err= 0: pid=2780500: Thu Nov 28 12:57:10 2024 00:33:29.668 read: IOPS=556, BW=2226KiB/s (2280kB/s)(21.8MiB/10005msec) 00:33:29.668 slat (nsec): min=6490, max=48835, avg=21646.47, stdev=6922.95 00:33:29.668 clat (usec): min=18280, max=45187, avg=28551.43, stdev=947.85 00:33:29.668 lat (usec): min=18298, max=45200, avg=28573.08, stdev=947.46 00:33:29.668 clat percentiles (usec): 00:33:29.668 | 1.00th=[27919], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443], 00:33:29.668 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:33:29.668 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:33:29.668 | 99.00th=[29230], 99.50th=[29754], 99.90th=[40633], 99.95th=[44827], 00:33:29.668 | 99.99th=[45351] 00:33:29.668 bw ( KiB/s): min= 2048, max= 2304, per=4.16%, avg=2223.16, stdev=76.45, samples=19 00:33:29.668 iops : min= 512, max= 576, avg=555.79, stdev=19.11, samples=19 00:33:29.668 lat (msec) : 20=0.29%, 50=99.71% 00:33:29.668 cpu : usr=98.55%, sys=1.08%, ctx=10, majf=0, minf=37 00:33:29.668 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:29.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.668 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.668 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.668 filename1: (groupid=0, jobs=1): err= 0: pid=2780501: Thu Nov 28 12:57:10 2024 
00:33:29.668 read: IOPS=559, BW=2238KiB/s (2291kB/s)(21.9MiB/10010msec) 00:33:29.668 slat (nsec): min=7599, max=96879, avg=27662.33, stdev=14240.22 00:33:29.668 clat (usec): min=12856, max=29800, avg=28377.82, stdev=1325.93 00:33:29.668 lat (usec): min=12901, max=29828, avg=28405.49, stdev=1325.26 00:33:29.669 clat percentiles (usec): 00:33:29.669 | 1.00th=[19268], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443], 00:33:29.669 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:33:29.669 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:33:29.669 | 99.00th=[29230], 99.50th=[29492], 99.90th=[29754], 99.95th=[29754], 00:33:29.669 | 99.99th=[29754] 00:33:29.669 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2233.60, stdev=77.42, samples=20 00:33:29.669 iops : min= 544, max= 608, avg=558.40, stdev=19.35, samples=20 00:33:29.669 lat (msec) : 20=1.14%, 50=98.86% 00:33:29.669 cpu : usr=98.60%, sys=1.02%, ctx=14, majf=0, minf=34 00:33:29.669 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:29.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.669 filename1: (groupid=0, jobs=1): err= 0: pid=2780502: Thu Nov 28 12:57:10 2024 00:33:29.669 read: IOPS=582, BW=2329KiB/s (2385kB/s)(22.8MiB/10035msec) 00:33:29.669 slat (nsec): min=3269, max=55229, avg=15431.31, stdev=6775.90 00:33:29.669 clat (usec): min=6849, max=49882, avg=27329.27, stdev=4218.74 00:33:29.669 lat (usec): min=6859, max=49897, avg=27344.70, stdev=4220.13 00:33:29.669 clat percentiles (usec): 00:33:29.669 | 1.00th=[ 6915], 5.00th=[17695], 10.00th=[23987], 20.00th=[28181], 00:33:29.669 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:33:29.669 | 
70.00th=[28705], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967], 00:33:29.669 | 99.00th=[29492], 99.50th=[37487], 99.90th=[42206], 99.95th=[42206], 00:33:29.669 | 99.99th=[50070] 00:33:29.669 bw ( KiB/s): min= 2176, max= 3168, per=4.36%, avg=2330.80, stdev=276.28, samples=20 00:33:29.669 iops : min= 544, max= 792, avg=582.70, stdev=69.07, samples=20 00:33:29.669 lat (msec) : 10=2.40%, 20=4.69%, 50=92.91% 00:33:29.669 cpu : usr=98.31%, sys=1.32%, ctx=14, majf=0, minf=63 00:33:29.669 IO depths : 1=4.6%, 2=9.9%, 4=21.9%, 8=55.5%, 16=8.0%, 32=0.0%, >=64=0.0% 00:33:29.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 complete : 0=0.0%, 4=93.2%, 8=1.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 issued rwts: total=5843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.669 filename1: (groupid=0, jobs=1): err= 0: pid=2780503: Thu Nov 28 12:57:10 2024 00:33:29.669 read: IOPS=557, BW=2232KiB/s (2285kB/s)(21.8MiB/10008msec) 00:33:29.669 slat (nsec): min=7250, max=44267, avg=16017.38, stdev=6448.47 00:33:29.669 clat (usec): min=13736, max=40872, avg=28544.21, stdev=1321.73 00:33:29.669 lat (usec): min=13748, max=40892, avg=28560.23, stdev=1321.24 00:33:29.669 clat percentiles (usec): 00:33:29.669 | 1.00th=[27919], 5.00th=[28181], 10.00th=[28443], 20.00th=[28443], 00:33:29.669 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705], 00:33:29.669 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[29230], 00:33:29.669 | 99.00th=[29492], 99.50th=[29754], 99.90th=[40633], 99.95th=[40633], 00:33:29.669 | 99.99th=[40633] 00:33:29.669 bw ( KiB/s): min= 2176, max= 2308, per=4.16%, avg=2227.40, stdev=64.59, samples=20 00:33:29.669 iops : min= 544, max= 577, avg=556.85, stdev=16.15, samples=20 00:33:29.669 lat (msec) : 20=0.86%, 50=99.14% 00:33:29.669 cpu : usr=98.37%, sys=1.25%, ctx=13, majf=0, minf=44 00:33:29.669 IO depths : 1=6.2%, 2=12.5%, 
4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:29.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 issued rwts: total=5584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.669 filename1: (groupid=0, jobs=1): err= 0: pid=2780504: Thu Nov 28 12:57:10 2024 00:33:29.669 read: IOPS=559, BW=2238KiB/s (2291kB/s)(21.9MiB/10010msec) 00:33:29.669 slat (nsec): min=7350, max=74272, avg=21075.57, stdev=6999.35 00:33:29.669 clat (usec): min=12937, max=36112, avg=28418.84, stdev=1341.09 00:33:29.669 lat (usec): min=12970, max=36135, avg=28439.92, stdev=1340.30 00:33:29.669 clat percentiles (usec): 00:33:29.669 | 1.00th=[19268], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443], 00:33:29.669 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:33:29.669 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:33:29.669 | 99.00th=[29230], 99.50th=[29492], 99.90th=[29754], 99.95th=[29754], 00:33:29.669 | 99.99th=[35914] 00:33:29.669 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2233.60, stdev=77.42, samples=20 00:33:29.669 iops : min= 544, max= 608, avg=558.40, stdev=19.35, samples=20 00:33:29.669 lat (msec) : 20=1.14%, 50=98.86% 00:33:29.669 cpu : usr=98.27%, sys=1.35%, ctx=17, majf=0, minf=35 00:33:29.669 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:29.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.669 filename2: (groupid=0, jobs=1): err= 0: pid=2780505: Thu Nov 28 12:57:10 2024 00:33:29.669 read: IOPS=556, BW=2227KiB/s 
(2280kB/s)(21.8MiB/10003msec) 00:33:29.669 slat (nsec): min=4599, max=91891, avg=34533.88, stdev=16448.27 00:33:29.669 clat (usec): min=9926, max=56452, avg=28424.87, stdev=1911.33 00:33:29.669 lat (usec): min=9940, max=56467, avg=28459.40, stdev=1910.96 00:33:29.669 clat percentiles (usec): 00:33:29.669 | 1.00th=[27919], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:33:29.669 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:33:29.669 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967], 00:33:29.669 | 99.00th=[29230], 99.50th=[29492], 99.90th=[56361], 99.95th=[56361], 00:33:29.669 | 99.99th=[56361] 00:33:29.669 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2216.63, stdev=74.05, samples=19 00:33:29.669 iops : min= 513, max= 576, avg=554.16, stdev=18.51, samples=19 00:33:29.669 lat (msec) : 10=0.07%, 20=0.50%, 50=99.14%, 100=0.29% 00:33:29.669 cpu : usr=98.18%, sys=1.44%, ctx=14, majf=0, minf=49 00:33:29.669 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:29.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.669 filename2: (groupid=0, jobs=1): err= 0: pid=2780506: Thu Nov 28 12:57:10 2024 00:33:29.669 read: IOPS=559, BW=2238KiB/s (2291kB/s)(21.9MiB/10011msec) 00:33:29.669 slat (nsec): min=7866, max=78581, avg=21590.69, stdev=7261.26 00:33:29.669 clat (usec): min=12743, max=29866, avg=28418.17, stdev=1311.98 00:33:29.669 lat (usec): min=12793, max=29881, avg=28439.77, stdev=1311.18 00:33:29.669 clat percentiles (usec): 00:33:29.669 | 1.00th=[19268], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443], 00:33:29.669 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:33:29.669 | 70.00th=[28705], 
80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:33:29.669 | 99.00th=[29492], 99.50th=[29492], 99.90th=[29754], 99.95th=[29754], 00:33:29.669 | 99.99th=[29754] 00:33:29.669 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2233.60, stdev=77.42, samples=20 00:33:29.669 iops : min= 544, max= 608, avg=558.40, stdev=19.35, samples=20 00:33:29.669 lat (msec) : 20=1.14%, 50=98.86% 00:33:29.669 cpu : usr=98.44%, sys=1.14%, ctx=12, majf=0, minf=45 00:33:29.669 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:29.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.669 filename2: (groupid=0, jobs=1): err= 0: pid=2780507: Thu Nov 28 12:57:10 2024 00:33:29.669 read: IOPS=556, BW=2227KiB/s (2281kB/s)(21.8MiB/10011msec) 00:33:29.669 slat (nsec): min=7680, max=97713, avg=39032.52, stdev=20037.44 00:33:29.669 clat (usec): min=13329, max=40317, avg=28387.12, stdev=956.64 00:33:29.669 lat (usec): min=13356, max=40374, avg=28426.16, stdev=956.32 00:33:29.669 clat percentiles (usec): 00:33:29.669 | 1.00th=[27657], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:33:29.669 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:33:29.669 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:33:29.669 | 99.00th=[29492], 99.50th=[31851], 99.90th=[40109], 99.95th=[40109], 00:33:29.669 | 99.99th=[40109] 00:33:29.669 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2223.20, stdev=61.74, samples=20 00:33:29.669 iops : min= 544, max= 576, avg=555.80, stdev=15.44, samples=20 00:33:29.669 lat (msec) : 20=0.22%, 50=99.78% 00:33:29.669 cpu : usr=98.42%, sys=1.19%, ctx=13, majf=0, minf=32 00:33:29.669 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 
32=0.0%, >=64=0.0% 00:33:29.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 issued rwts: total=5574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.669 filename2: (groupid=0, jobs=1): err= 0: pid=2780508: Thu Nov 28 12:57:10 2024 00:33:29.669 read: IOPS=556, BW=2227KiB/s (2280kB/s)(21.8MiB/10003msec) 00:33:29.669 slat (nsec): min=6320, max=91924, avg=32937.80, stdev=16822.55 00:33:29.669 clat (usec): min=9887, max=56524, avg=28421.49, stdev=1914.00 00:33:29.669 lat (usec): min=9895, max=56537, avg=28454.43, stdev=1914.04 00:33:29.669 clat percentiles (usec): 00:33:29.669 | 1.00th=[27919], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:33:29.669 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:33:29.669 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967], 00:33:29.669 | 99.00th=[29230], 99.50th=[29492], 99.90th=[56361], 99.95th=[56361], 00:33:29.669 | 99.99th=[56361] 00:33:29.669 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2216.63, stdev=74.05, samples=19 00:33:29.669 iops : min= 513, max= 576, avg=554.16, stdev=18.51, samples=19 00:33:29.669 lat (msec) : 10=0.18%, 20=0.40%, 50=99.14%, 100=0.29% 00:33:29.669 cpu : usr=98.37%, sys=1.25%, ctx=13, majf=0, minf=33 00:33:29.669 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:29.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.669 filename2: (groupid=0, jobs=1): err= 0: pid=2780509: Thu Nov 28 12:57:10 2024 00:33:29.669 read: IOPS=556, BW=2226KiB/s (2280kB/s)(21.8MiB/10004msec) 
00:33:29.669 slat (nsec): min=6195, max=50869, avg=22119.56, stdev=7312.08 00:33:29.669 clat (usec): min=10715, max=60925, avg=28542.14, stdev=1907.87 00:33:29.669 lat (usec): min=10730, max=60939, avg=28564.26, stdev=1907.55 00:33:29.669 clat percentiles (usec): 00:33:29.669 | 1.00th=[27919], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443], 00:33:29.669 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:33:29.669 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:33:29.669 | 99.00th=[29492], 99.50th=[29754], 99.90th=[56886], 99.95th=[56886], 00:33:29.669 | 99.99th=[61080] 00:33:29.669 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2216.42, stdev=74.55, samples=19 00:33:29.669 iops : min= 512, max= 576, avg=554.11, stdev=18.64, samples=19 00:33:29.669 lat (msec) : 20=0.57%, 50=99.14%, 100=0.29% 00:33:29.669 cpu : usr=98.24%, sys=1.38%, ctx=16, majf=0, minf=36 00:33:29.669 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:29.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.669 filename2: (groupid=0, jobs=1): err= 0: pid=2780510: Thu Nov 28 12:57:10 2024 00:33:29.669 read: IOPS=557, BW=2232KiB/s (2285kB/s)(21.8MiB/10008msec) 00:33:29.669 slat (nsec): min=6227, max=86691, avg=17999.00, stdev=7331.70 00:33:29.669 clat (usec): min=13796, max=40813, avg=28529.56, stdev=1227.13 00:33:29.669 lat (usec): min=13805, max=40833, avg=28547.56, stdev=1226.91 00:33:29.669 clat percentiles (usec): 00:33:29.669 | 1.00th=[27919], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443], 00:33:29.669 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705], 00:33:29.669 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[29230], 
00:33:29.669 | 99.00th=[29492], 99.50th=[29754], 99.90th=[40633], 99.95th=[40633], 00:33:29.669 | 99.99th=[40633] 00:33:29.669 bw ( KiB/s): min= 2176, max= 2308, per=4.16%, avg=2227.40, stdev=64.59, samples=20 00:33:29.669 iops : min= 544, max= 577, avg=556.85, stdev=16.15, samples=20 00:33:29.669 lat (msec) : 20=0.86%, 50=99.14% 00:33:29.669 cpu : usr=98.37%, sys=1.24%, ctx=33, majf=0, minf=49 00:33:29.669 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:29.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 issued rwts: total=5584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.669 filename2: (groupid=0, jobs=1): err= 0: pid=2780511: Thu Nov 28 12:57:10 2024 00:33:29.669 read: IOPS=559, BW=2238KiB/s (2291kB/s)(21.9MiB/10011msec) 00:33:29.669 slat (usec): min=7, max=106, avg=25.99, stdev=11.70 00:33:29.669 clat (usec): min=12790, max=29890, avg=28387.80, stdev=1304.82 00:33:29.669 lat (usec): min=12809, max=29921, avg=28413.79, stdev=1303.33 00:33:29.669 clat percentiles (usec): 00:33:29.669 | 1.00th=[19268], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443], 00:33:29.669 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:33:29.669 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:33:29.669 | 99.00th=[29230], 99.50th=[29492], 99.90th=[29754], 99.95th=[29754], 00:33:29.669 | 99.99th=[30016] 00:33:29.669 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2233.60, stdev=77.42, samples=20 00:33:29.669 iops : min= 544, max= 608, avg=558.40, stdev=19.35, samples=20 00:33:29.669 lat (msec) : 20=1.07%, 50=98.93% 00:33:29.669 cpu : usr=98.50%, sys=1.11%, ctx=17, majf=0, minf=35 00:33:29.669 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:29.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.669 filename2: (groupid=0, jobs=1): err= 0: pid=2780512: Thu Nov 28 12:57:10 2024 00:33:29.669 read: IOPS=556, BW=2226KiB/s (2280kB/s)(21.8MiB/10004msec) 00:33:29.669 slat (usec): min=8, max=184, avg=42.18, stdev=20.52 00:33:29.669 clat (usec): min=9997, max=57473, avg=28371.91, stdev=1949.34 00:33:29.669 lat (usec): min=10014, max=57510, avg=28414.09, stdev=1949.68 00:33:29.669 clat percentiles (usec): 00:33:29.669 | 1.00th=[27657], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:33:29.669 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:33:29.669 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967], 00:33:29.669 | 99.00th=[29492], 99.50th=[29492], 99.90th=[57410], 99.95th=[57410], 00:33:29.669 | 99.99th=[57410] 00:33:29.669 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2216.42, stdev=74.55, samples=19 00:33:29.669 iops : min= 512, max= 576, avg=554.11, stdev=18.64, samples=19 00:33:29.669 lat (msec) : 10=0.02%, 20=0.56%, 50=99.14%, 100=0.29% 00:33:29.669 cpu : usr=98.50%, sys=1.09%, ctx=15, majf=0, minf=43 00:33:29.669 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:29.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.669 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.669 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:29.669 00:33:29.669 Run status group 0 (all jobs): 00:33:29.669 READ: bw=52.2MiB/s (54.8MB/s), 2224KiB/s-2329KiB/s (2277kB/s-2385kB/s), io=524MiB (550MB), run=10003-10035msec 00:33:29.669 12:57:10 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:29.669 12:57:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:29.669 12:57:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:29.669 12:57:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:29.669 12:57:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:29.669 12:57:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:29.669 12:57:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.669 12:57:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:29.669 12:57:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.670 12:57:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:29.670 12:57:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.670 12:57:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:29.670 12:57:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.670 12:57:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:29.670 12:57:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:29.670 12:57:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:29.670 12:57:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:29.670 12:57:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.670 12:57:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:29.670 12:57:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.670 12:57:10 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:29.670 12:57:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.670 12:57:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:29.670 12:57:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.670 12:57:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:29.670 12:57:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:29.670 12:57:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:29.670 12:57:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:29.670 12:57:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.670 12:57:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:29.670 12:57:11 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:29.670 bdev_null0 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:29.670 [2024-11-28 12:57:11.047501] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:29.670 bdev_null1 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:29.670 { 00:33:29.670 "params": { 00:33:29.670 "name": "Nvme$subsystem", 00:33:29.670 "trtype": "$TEST_TRANSPORT", 00:33:29.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:29.670 "adrfam": "ipv4", 00:33:29.670 "trsvcid": "$NVMF_PORT", 00:33:29.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:29.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:29.670 "hdgst": ${hdgst:-false}, 00:33:29.670 "ddgst": ${ddgst:-false} 00:33:29.670 }, 00:33:29.670 "method": 
"bdev_nvme_attach_controller" 00:33:29.670 } 00:33:29.670 EOF 00:33:29.670 )") 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:29.670 12:57:11 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:29.670 { 00:33:29.670 "params": { 00:33:29.670 "name": "Nvme$subsystem", 00:33:29.670 "trtype": "$TEST_TRANSPORT", 00:33:29.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:29.670 "adrfam": "ipv4", 00:33:29.670 "trsvcid": "$NVMF_PORT", 00:33:29.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:29.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:29.670 "hdgst": ${hdgst:-false}, 00:33:29.670 "ddgst": ${ddgst:-false} 00:33:29.670 }, 00:33:29.670 "method": "bdev_nvme_attach_controller" 00:33:29.670 } 00:33:29.670 EOF 00:33:29.670 )") 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:29.670 "params": { 00:33:29.670 "name": "Nvme0", 00:33:29.670 "trtype": "tcp", 00:33:29.670 "traddr": "10.0.0.2", 00:33:29.670 "adrfam": "ipv4", 00:33:29.670 "trsvcid": "4420", 00:33:29.670 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:29.670 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:29.670 "hdgst": false, 00:33:29.670 "ddgst": false 00:33:29.670 }, 00:33:29.670 "method": "bdev_nvme_attach_controller" 00:33:29.670 },{ 00:33:29.670 "params": { 00:33:29.670 "name": "Nvme1", 00:33:29.670 "trtype": "tcp", 00:33:29.670 "traddr": "10.0.0.2", 00:33:29.670 "adrfam": "ipv4", 00:33:29.670 "trsvcid": "4420", 00:33:29.670 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:29.670 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:29.670 "hdgst": false, 00:33:29.670 "ddgst": false 00:33:29.670 }, 00:33:29.670 "method": "bdev_nvme_attach_controller" 00:33:29.670 }' 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:29.670 12:57:11 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:29.670 12:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:29.670 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:29.670 ... 00:33:29.670 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:29.670 ... 00:33:29.670 fio-3.35 00:33:29.670 Starting 4 threads 00:33:34.927 00:33:34.927 filename0: (groupid=0, jobs=1): err= 0: pid=2782932: Thu Nov 28 12:57:17 2024 00:33:34.927 read: IOPS=2554, BW=20.0MiB/s (20.9MB/s)(99.9MiB/5005msec) 00:33:34.927 slat (nsec): min=6305, max=43700, avg=9108.50, stdev=3292.03 00:33:34.927 clat (usec): min=1164, max=6839, avg=3105.98, stdev=567.15 00:33:34.927 lat (usec): min=1171, max=6846, avg=3115.09, stdev=566.98 00:33:34.927 clat percentiles (usec): 00:33:34.927 | 1.00th=[ 2024], 5.00th=[ 2343], 10.00th=[ 2540], 20.00th=[ 2704], 00:33:34.927 | 30.00th=[ 2835], 40.00th=[ 2966], 50.00th=[ 3064], 60.00th=[ 3130], 00:33:34.927 | 70.00th=[ 3195], 80.00th=[ 3392], 90.00th=[ 3818], 95.00th=[ 4293], 00:33:34.927 | 99.00th=[ 5080], 99.50th=[ 5211], 99.90th=[ 5735], 99.95th=[ 6063], 00:33:34.927 | 99.99th=[ 6849] 00:33:34.927 bw ( KiB/s): min=18896, max=22144, per=25.06%, avg=20440.00, stdev=834.30, samples=10 00:33:34.927 iops : min= 2362, max= 2768, avg=2555.00, stdev=104.29, samples=10 00:33:34.927 lat (msec) : 2=0.89%, 4=92.00%, 10=7.11% 00:33:34.927 cpu : usr=96.22%, sys=3.48%, ctx=6, majf=0, minf=9 00:33:34.927 IO depths : 1=0.5%, 2=2.9%, 4=68.7%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:34.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.927 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.927 issued rwts: total=12783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:34.927 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:34.927 filename0: (groupid=0, jobs=1): err= 0: pid=2782933: Thu Nov 28 12:57:17 2024 00:33:34.927 read: IOPS=2415, BW=18.9MiB/s (19.8MB/s)(94.4MiB/5004msec) 00:33:34.927 slat (nsec): min=6330, max=38203, avg=8993.62, stdev=3255.95 00:33:34.927 clat (usec): min=1051, max=6647, avg=3284.57, stdev=538.70 00:33:34.927 lat (usec): min=1061, max=6653, avg=3293.56, stdev=538.22 00:33:34.927 clat percentiles (usec): 00:33:34.927 | 1.00th=[ 2343], 5.00th=[ 2704], 10.00th=[ 2835], 20.00th=[ 2933], 00:33:34.927 | 30.00th=[ 3032], 40.00th=[ 3097], 50.00th=[ 3130], 60.00th=[ 3195], 00:33:34.927 | 70.00th=[ 3326], 80.00th=[ 3523], 90.00th=[ 3916], 95.00th=[ 4555], 00:33:34.927 | 99.00th=[ 5145], 99.50th=[ 5276], 99.90th=[ 5735], 99.95th=[ 5800], 00:33:34.927 | 99.99th=[ 6652] 00:33:34.927 bw ( KiB/s): min=18624, max=19936, per=23.73%, avg=19356.44, stdev=453.45, samples=9 00:33:34.927 iops : min= 2328, max= 2492, avg=2419.56, stdev=56.68, samples=9 00:33:34.927 lat (msec) : 2=0.12%, 4=90.48%, 10=9.41% 00:33:34.927 cpu : usr=95.96%, sys=3.74%, ctx=7, majf=0, minf=9 00:33:34.927 IO depths : 1=0.1%, 2=2.4%, 4=70.9%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:34.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.927 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.927 issued rwts: total=12089,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:34.927 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:34.927 filename1: (groupid=0, jobs=1): err= 0: pid=2782934: Thu Nov 28 12:57:17 2024 00:33:34.927 read: IOPS=2756, BW=21.5MiB/s (22.6MB/s)(108MiB/5002msec) 00:33:34.927 slat (nsec): min=4278, max=31368, avg=9025.86, stdev=3038.46 00:33:34.927 clat (usec): min=849, max=5898, avg=2875.31, stdev=527.72 00:33:34.927 lat (usec): min=858, max=5911, 
avg=2884.34, stdev=527.73 00:33:34.927 clat percentiles (usec): 00:33:34.927 | 1.00th=[ 1795], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2474], 00:33:34.927 | 30.00th=[ 2606], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 2966], 00:33:34.927 | 70.00th=[ 3097], 80.00th=[ 3163], 90.00th=[ 3425], 95.00th=[ 3916], 00:33:34.927 | 99.00th=[ 4621], 99.50th=[ 4752], 99.90th=[ 5211], 99.95th=[ 5407], 00:33:34.927 | 99.99th=[ 5866] 00:33:34.927 bw ( KiB/s): min=20528, max=24464, per=27.04%, avg=22051.20, stdev=1191.28, samples=10 00:33:34.927 iops : min= 2566, max= 3060, avg=2756.60, stdev=149.36, samples=10 00:33:34.927 lat (usec) : 1000=0.29% 00:33:34.927 lat (msec) : 2=2.13%, 4=93.50%, 10=4.08% 00:33:34.927 cpu : usr=95.58%, sys=4.08%, ctx=18, majf=0, minf=9 00:33:34.927 IO depths : 1=0.3%, 2=5.5%, 4=66.1%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:34.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.927 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.927 issued rwts: total=13786,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:34.927 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:34.927 filename1: (groupid=0, jobs=1): err= 0: pid=2782935: Thu Nov 28 12:57:17 2024 00:33:34.927 read: IOPS=2470, BW=19.3MiB/s (20.2MB/s)(96.6MiB/5005msec) 00:33:34.927 slat (nsec): min=6305, max=41752, avg=9309.98, stdev=3351.84 00:33:34.927 clat (usec): min=1138, max=6478, avg=3210.22, stdev=490.60 00:33:34.927 lat (usec): min=1150, max=6490, avg=3219.53, stdev=490.27 00:33:34.927 clat percentiles (usec): 00:33:34.927 | 1.00th=[ 2245], 5.00th=[ 2606], 10.00th=[ 2769], 20.00th=[ 2900], 00:33:34.927 | 30.00th=[ 2999], 40.00th=[ 3064], 50.00th=[ 3130], 60.00th=[ 3163], 00:33:34.928 | 70.00th=[ 3294], 80.00th=[ 3425], 90.00th=[ 3818], 95.00th=[ 4228], 00:33:34.928 | 99.00th=[ 4948], 99.50th=[ 5211], 99.90th=[ 5604], 99.95th=[ 5997], 00:33:34.928 | 99.99th=[ 6456] 00:33:34.928 bw ( KiB/s): min=18992, max=20384, 
per=24.25%, avg=19778.40, stdev=372.76, samples=10 00:33:34.928 iops : min= 2374, max= 2548, avg=2472.30, stdev=46.60, samples=10 00:33:34.928 lat (msec) : 2=0.19%, 4=93.35%, 10=6.46% 00:33:34.928 cpu : usr=96.06%, sys=3.62%, ctx=12, majf=0, minf=9 00:33:34.928 IO depths : 1=0.2%, 2=2.2%, 4=70.2%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:34.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.928 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.928 issued rwts: total=12367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:34.928 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:34.928 00:33:34.928 Run status group 0 (all jobs): 00:33:34.928 READ: bw=79.6MiB/s (83.5MB/s), 18.9MiB/s-21.5MiB/s (19.8MB/s-22.6MB/s), io=399MiB (418MB), run=5002-5005msec 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.928 00:33:34.928 real 0m24.526s 00:33:34.928 user 4m50.893s 00:33:34.928 sys 0m5.555s 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:34.928 12:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.928 ************************************ 00:33:34.928 END TEST fio_dif_rand_params 00:33:34.928 ************************************ 00:33:35.185 12:57:17 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:35.185 12:57:17 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:35.185 12:57:17 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:35.185 12:57:17 
nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:35.186 ************************************ 00:33:35.186 START TEST fio_dif_digest 00:33:35.186 ************************************ 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:35.186 bdev_null0 00:33:35.186 12:57:17 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:35.186 [2024-11-28 12:57:17.530044] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:35.186 { 00:33:35.186 "params": { 00:33:35.186 "name": "Nvme$subsystem", 00:33:35.186 "trtype": "$TEST_TRANSPORT", 00:33:35.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:35.186 "adrfam": "ipv4", 00:33:35.186 "trsvcid": "$NVMF_PORT", 00:33:35.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:35.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:35.186 "hdgst": ${hdgst:-false}, 00:33:35.186 "ddgst": ${ddgst:-false} 00:33:35.186 }, 00:33:35.186 "method": "bdev_nvme_attach_controller" 00:33:35.186 } 00:33:35.186 EOF 00:33:35.186 )") 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:33:35.186 12:57:17 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:35.186 "params": { 00:33:35.186 "name": "Nvme0", 00:33:35.186 "trtype": "tcp", 00:33:35.186 "traddr": "10.0.0.2", 00:33:35.186 "adrfam": "ipv4", 00:33:35.186 "trsvcid": "4420", 00:33:35.186 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:35.186 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:35.186 "hdgst": true, 00:33:35.186 "ddgst": true 00:33:35.186 }, 00:33:35.186 "method": "bdev_nvme_attach_controller" 00:33:35.186 }' 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:35.186 12:57:17 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:35.186 12:57:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:35.445 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:35.445 ... 00:33:35.445 fio-3.35 00:33:35.445 Starting 3 threads 00:33:47.638 00:33:47.638 filename0: (groupid=0, jobs=1): err= 0: pid=2783998: Thu Nov 28 12:57:28 2024 00:33:47.638 read: IOPS=281, BW=35.2MiB/s (36.9MB/s)(354MiB/10048msec) 00:33:47.638 slat (nsec): min=6631, max=26527, avg=12085.20, stdev=1967.08 00:33:47.638 clat (usec): min=6671, max=53145, avg=10616.96, stdev=1310.95 00:33:47.638 lat (usec): min=6684, max=53159, avg=10629.04, stdev=1310.92 00:33:47.638 clat percentiles (usec): 00:33:47.638 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:33:47.638 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:33:47.638 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11469], 95.00th=[11863], 00:33:47.638 | 99.00th=[12518], 99.50th=[12649], 99.90th=[13435], 99.95th=[47449], 00:33:47.638 | 99.99th=[53216] 00:33:47.638 bw ( KiB/s): min=34816, max=37632, per=35.42%, avg=36211.20, stdev=798.71, samples=20 00:33:47.638 iops : min= 272, max= 294, avg=282.90, stdev= 6.24, samples=20 00:33:47.638 lat (msec) : 10=21.23%, 20=78.70%, 50=0.04%, 100=0.04% 00:33:47.638 cpu : usr=93.36%, sys=6.34%, 
ctx=33, majf=0, minf=79 00:33:47.638 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:47.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.638 issued rwts: total=2831,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.638 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:47.638 filename0: (groupid=0, jobs=1): err= 0: pid=2783999: Thu Nov 28 12:57:28 2024 00:33:47.638 read: IOPS=259, BW=32.5MiB/s (34.1MB/s)(326MiB/10045msec) 00:33:47.638 slat (nsec): min=6613, max=27734, avg=12373.48, stdev=1938.59 00:33:47.638 clat (usec): min=6891, max=47976, avg=11510.70, stdev=1257.01 00:33:47.638 lat (usec): min=6899, max=47994, avg=11523.07, stdev=1257.09 00:33:47.638 clat percentiles (usec): 00:33:47.638 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10552], 20.00th=[10814], 00:33:47.638 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11600], 00:33:47.638 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12780], 00:33:47.638 | 99.00th=[13304], 99.50th=[13566], 99.90th=[14353], 99.95th=[44827], 00:33:47.638 | 99.99th=[47973] 00:33:47.638 bw ( KiB/s): min=32256, max=35072, per=32.66%, avg=33395.20, stdev=578.28, samples=20 00:33:47.638 iops : min= 252, max= 274, avg=260.90, stdev= 4.52, samples=20 00:33:47.638 lat (msec) : 10=2.41%, 20=97.51%, 50=0.08% 00:33:47.638 cpu : usr=94.35%, sys=5.35%, ctx=21, majf=0, minf=64 00:33:47.638 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:47.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.638 issued rwts: total=2611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.638 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:47.638 filename0: (groupid=0, jobs=1): err= 0: pid=2784000: Thu Nov 28 12:57:28 
2024 00:33:47.638 read: IOPS=257, BW=32.2MiB/s (33.7MB/s)(323MiB/10045msec) 00:33:47.638 slat (nsec): min=6616, max=26561, avg=12488.02, stdev=1685.53 00:33:47.638 clat (usec): min=8702, max=52329, avg=11630.95, stdev=1873.20 00:33:47.638 lat (usec): min=8716, max=52342, avg=11643.44, stdev=1873.19 00:33:47.638 clat percentiles (usec): 00:33:47.638 | 1.00th=[ 9765], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 00:33:47.638 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:33:47.638 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12911], 00:33:47.638 | 99.00th=[13829], 99.50th=[14222], 99.90th=[51643], 99.95th=[52167], 00:33:47.638 | 99.99th=[52167] 00:33:47.638 bw ( KiB/s): min=30720, max=34048, per=32.32%, avg=33049.60, stdev=858.75, samples=20 00:33:47.638 iops : min= 240, max= 266, avg=258.20, stdev= 6.71, samples=20 00:33:47.638 lat (msec) : 10=2.09%, 20=97.72%, 50=0.08%, 100=0.12% 00:33:47.638 cpu : usr=93.49%, sys=6.21%, ctx=22, majf=0, minf=91 00:33:47.638 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:47.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.638 issued rwts: total=2584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.638 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:47.638 00:33:47.638 Run status group 0 (all jobs): 00:33:47.638 READ: bw=99.8MiB/s (105MB/s), 32.2MiB/s-35.2MiB/s (33.7MB/s-36.9MB/s), io=1003MiB (1052MB), run=10045-10048msec 00:33:47.638 12:57:28 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:33:47.638 12:57:28 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:33:47.638 12:57:28 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:33:47.638 12:57:28 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:47.638 12:57:28 nvmf_dif.fio_dif_digest -- 
target/dif.sh@36 -- # local sub_id=0 00:33:47.638 12:57:28 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:47.638 12:57:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.638 12:57:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:47.638 12:57:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.638 12:57:28 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:47.638 12:57:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.638 12:57:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:47.638 12:57:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.638 00:33:47.638 real 0m11.112s 00:33:47.638 user 0m35.143s 00:33:47.638 sys 0m2.072s 00:33:47.638 12:57:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:47.638 12:57:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:47.638 ************************************ 00:33:47.638 END TEST fio_dif_digest 00:33:47.638 ************************************ 00:33:47.638 12:57:28 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:33:47.638 12:57:28 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:33:47.638 12:57:28 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:47.638 12:57:28 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:33:47.638 12:57:28 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:47.638 12:57:28 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:33:47.638 12:57:28 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:47.638 12:57:28 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:47.638 rmmod nvme_tcp 00:33:47.639 rmmod nvme_fabrics 00:33:47.639 rmmod nvme_keyring 00:33:47.639 12:57:28 nvmf_dif -- nvmf/common.sh@127 -- 
# modprobe -v -r nvme-fabrics 00:33:47.639 12:57:28 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:33:47.639 12:57:28 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:33:47.639 12:57:28 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2775031 ']' 00:33:47.639 12:57:28 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2775031 00:33:47.639 12:57:28 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2775031 ']' 00:33:47.639 12:57:28 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2775031 00:33:47.639 12:57:28 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:33:47.639 12:57:28 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:47.639 12:57:28 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2775031 00:33:47.639 12:57:28 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:47.639 12:57:28 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:47.639 12:57:28 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2775031' 00:33:47.639 killing process with pid 2775031 00:33:47.639 12:57:28 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2775031 00:33:47.639 12:57:28 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2775031 00:33:47.639 12:57:28 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:33:47.639 12:57:28 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:48.573 Waiting for block devices as requested 00:33:48.833 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:48.833 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:48.833 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:49.092 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:49.092 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:49.092 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:49.092 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:49.351 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 
00:33:49.351 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:49.351 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:49.351 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:49.610 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:49.610 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:49.610 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:49.610 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:49.868 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:49.868 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:49.868 12:57:32 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:49.868 12:57:32 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:49.868 12:57:32 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:33:49.868 12:57:32 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:49.868 12:57:32 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:33:49.868 12:57:32 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:33:49.868 12:57:32 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:49.868 12:57:32 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:49.868 12:57:32 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.868 12:57:32 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:49.868 12:57:32 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.401 12:57:34 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:52.401 00:33:52.401 real 1m12.437s 00:33:52.401 user 7m7.620s 00:33:52.401 sys 0m20.316s 00:33:52.401 12:57:34 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:52.401 12:57:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:52.401 ************************************ 00:33:52.401 END TEST nvmf_dif 00:33:52.401 ************************************ 00:33:52.401 12:57:34 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:52.401 12:57:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:52.401 12:57:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:52.401 12:57:34 -- common/autotest_common.sh@10 -- # set +x 00:33:52.401 ************************************ 00:33:52.401 START TEST nvmf_abort_qd_sizes 00:33:52.401 ************************************ 00:33:52.401 12:57:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:52.401 * Looking for test storage... 00:33:52.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:52.401 12:57:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:52.401 12:57:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:33:52.401 12:57:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 
00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:52.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.402 --rc genhtml_branch_coverage=1 00:33:52.402 --rc genhtml_function_coverage=1 00:33:52.402 --rc 
genhtml_legend=1 00:33:52.402 --rc geninfo_all_blocks=1 00:33:52.402 --rc geninfo_unexecuted_blocks=1 00:33:52.402 00:33:52.402 ' 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:52.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.402 --rc genhtml_branch_coverage=1 00:33:52.402 --rc genhtml_function_coverage=1 00:33:52.402 --rc genhtml_legend=1 00:33:52.402 --rc geninfo_all_blocks=1 00:33:52.402 --rc geninfo_unexecuted_blocks=1 00:33:52.402 00:33:52.402 ' 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:52.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.402 --rc genhtml_branch_coverage=1 00:33:52.402 --rc genhtml_function_coverage=1 00:33:52.402 --rc genhtml_legend=1 00:33:52.402 --rc geninfo_all_blocks=1 00:33:52.402 --rc geninfo_unexecuted_blocks=1 00:33:52.402 00:33:52.402 ' 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:52.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.402 --rc genhtml_branch_coverage=1 00:33:52.402 --rc genhtml_function_coverage=1 00:33:52.402 --rc genhtml_legend=1 00:33:52.402 --rc geninfo_all_blocks=1 00:33:52.402 --rc geninfo_unexecuted_blocks=1 00:33:52.402 00:33:52.402 ' 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:52.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:33:52.402 12:57:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:57.668 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:57.668 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:57.668 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:57.669 Found net devices under 0000:86:00.0: cvl_0_0 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:57.669 Found net devices under 0000:86:00.1: cvl_0_1 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:57.669 12:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:57.669 12:57:40 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:57.669 12:57:40 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:57.669 12:57:40 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:57.669 12:57:40 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:57.669 12:57:40 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:57.928 12:57:40 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:57.928 12:57:40 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:57.928 12:57:40 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:57.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:57.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:33:57.928 00:33:57.928 --- 10.0.0.2 ping statistics --- 00:33:57.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:57.928 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:33:57.928 12:57:40 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:57.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:57.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:33:57.928 00:33:57.928 --- 10.0.0.1 ping statistics --- 00:33:57.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:57.928 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:33:57.928 12:57:40 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:57.928 12:57:40 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:33:57.928 12:57:40 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:57.928 12:57:40 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:00.458 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:00.458 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:00.458 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:00.458 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:00.458 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:00.458 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:00.458 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:00.458 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:00.458 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:00.458 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:00.458 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:00.458 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:00.458 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:00.458 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:00.459 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:34:00.459 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:01.394 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:01.394 12:57:43 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:01.394 12:57:43 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:01.394 12:57:43 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:01.394 12:57:43 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:01.394 12:57:43 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:01.394 12:57:43 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:01.394 12:57:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:01.394 12:57:43 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:01.394 12:57:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:01.394 12:57:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:01.394 12:57:43 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2791772 00:34:01.394 12:57:43 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2791772 00:34:01.394 12:57:43 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:01.394 12:57:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2791772 ']' 00:34:01.394 12:57:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:01.394 12:57:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:01.394 12:57:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:01.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:01.394 12:57:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:01.394 12:57:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:01.394 [2024-11-28 12:57:43.851526] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:34:01.394 [2024-11-28 12:57:43.851570] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:01.652 [2024-11-28 12:57:43.917640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:01.652 [2024-11-28 12:57:43.961526] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:01.652 [2024-11-28 12:57:43.961563] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:01.652 [2024-11-28 12:57:43.961570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:01.652 [2024-11-28 12:57:43.961576] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:01.652 [2024-11-28 12:57:43.961581] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:01.652 [2024-11-28 12:57:43.963106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:01.652 [2024-11-28 12:57:43.963207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:01.652 [2024-11-28 12:57:43.963282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:01.652 [2024-11-28 12:57:43.963284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:01.652 12:57:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:01.652 ************************************ 00:34:01.652 START TEST spdk_target_abort 00:34:01.652 ************************************ 00:34:01.652 12:57:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:01.652 12:57:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:01.652 12:57:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:01.652 12:57:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.652 12:57:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:04.931 spdk_targetn1 00:34:04.931 12:57:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.931 12:57:46 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:04.931 12:57:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.931 12:57:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:04.931 [2024-11-28 12:57:46.977501] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:04.931 12:57:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.931 12:57:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:04.931 12:57:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.931 12:57:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:04.931 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.931 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:04.931 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.931 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:04.931 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.931 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:04.931 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.931 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:04.931 [2024-11-28 12:57:47.025786] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:04.931 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.931 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:04.931 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:04.931 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:04.931 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:04.931 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:04.931 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:04.931 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:04.931 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:04.932 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:04.932 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:04.932 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:04.932 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:04.932 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:04.932 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:04.932 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:04.932 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:04.932 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:04.932 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:04.932 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:04.932 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:04.932 12:57:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:08.212 Initializing NVMe Controllers 00:34:08.212 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:08.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:08.212 Initialization complete. Launching workers. 
00:34:08.212 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15813, failed: 0 00:34:08.212 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1384, failed to submit 14429 00:34:08.212 success 734, unsuccessful 650, failed 0 00:34:08.212 12:57:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:08.212 12:57:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:11.495 Initializing NVMe Controllers 00:34:11.495 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:11.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:11.495 Initialization complete. Launching workers. 00:34:11.495 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8548, failed: 0 00:34:11.495 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1240, failed to submit 7308 00:34:11.495 success 303, unsuccessful 937, failed 0 00:34:11.495 12:57:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:11.495 12:57:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:14.775 Initializing NVMe Controllers 00:34:14.775 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:14.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:14.775 Initialization complete. Launching workers. 
00:34:14.775 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37732, failed: 0 00:34:14.775 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2811, failed to submit 34921 00:34:14.775 success 584, unsuccessful 2227, failed 0 00:34:14.775 12:57:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:14.775 12:57:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.775 12:57:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:14.775 12:57:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.775 12:57:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:14.775 12:57:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.775 12:57:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:15.708 12:57:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.708 12:57:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2791772 00:34:15.708 12:57:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2791772 ']' 00:34:15.708 12:57:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2791772 00:34:15.708 12:57:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:34:15.708 12:57:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:15.708 12:57:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2791772 00:34:15.708 12:57:58 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:15.708 12:57:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:15.708 12:57:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2791772' 00:34:15.708 killing process with pid 2791772 00:34:15.708 12:57:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2791772 00:34:15.708 12:57:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2791772 00:34:15.708 00:34:15.708 real 0m14.023s 00:34:15.708 user 0m53.410s 00:34:15.708 sys 0m2.580s 00:34:15.708 12:57:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:15.708 12:57:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:15.708 ************************************ 00:34:15.708 END TEST spdk_target_abort 00:34:15.708 ************************************ 00:34:15.708 12:57:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:15.708 12:57:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:15.708 12:57:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:15.708 12:57:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:15.966 ************************************ 00:34:15.966 START TEST kernel_target_abort 00:34:15.966 ************************************ 00:34:15.966 12:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:34:15.966 12:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:15.966 12:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:15.966 12:57:58 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.966 12:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.966 12:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.966 12:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.966 12:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.966 12:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.966 12:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.966 12:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.966 12:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.966 12:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:15.966 12:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:15.966 12:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:15.966 12:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:15.966 12:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:15.966 12:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:15.966 12:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:34:15.966 12:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:15.966 12:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:15.966 12:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:15.966 12:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:18.499 Waiting for block devices as requested 00:34:18.499 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:18.499 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:18.757 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:18.757 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:18.757 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:18.757 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:19.015 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:19.015 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:19.015 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:19.015 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:19.274 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:19.274 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:19.274 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:19.533 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:19.533 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:19.533 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:19.533 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:19.792 12:58:02 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:19.792 No valid GPT data, bailing 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:19.792 00:34:19.792 Discovery Log Number of Records 2, Generation counter 2 00:34:19.792 =====Discovery Log Entry 0====== 00:34:19.792 trtype: tcp 00:34:19.792 adrfam: ipv4 00:34:19.792 subtype: current discovery subsystem 00:34:19.792 treq: not specified, sq flow control disable supported 00:34:19.792 portid: 1 00:34:19.792 trsvcid: 4420 00:34:19.792 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:19.792 traddr: 10.0.0.1 00:34:19.792 eflags: none 00:34:19.792 sectype: none 00:34:19.792 =====Discovery Log Entry 1====== 00:34:19.792 trtype: tcp 00:34:19.792 adrfam: ipv4 00:34:19.792 subtype: nvme subsystem 00:34:19.792 treq: not specified, sq flow control disable supported 00:34:19.792 portid: 1 00:34:19.792 trsvcid: 4420 00:34:19.792 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:19.792 traddr: 10.0.0.1 00:34:19.792 eflags: none 00:34:19.792 sectype: none 00:34:19.792 12:58:02 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:19.792 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:20.051 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:34:20.051 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:20.051 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:20.051 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:20.051 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:20.051 12:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:23.330 Initializing NVMe Controllers 00:34:23.330 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:23.330 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:23.330 Initialization complete. Launching workers. 
00:34:23.330 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 90336, failed: 0 00:34:23.330 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 90336, failed to submit 0 00:34:23.330 success 0, unsuccessful 90336, failed 0 00:34:23.330 12:58:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:23.330 12:58:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:26.611 Initializing NVMe Controllers 00:34:26.611 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:26.611 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:26.611 Initialization complete. Launching workers. 00:34:26.611 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 144133, failed: 0 00:34:26.611 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36150, failed to submit 107983 00:34:26.611 success 0, unsuccessful 36150, failed 0 00:34:26.612 12:58:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:26.612 12:58:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:29.141 Initializing NVMe Controllers 00:34:29.141 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:29.141 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:29.141 Initialization complete. Launching workers. 
00:34:29.141 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 135565, failed: 0 00:34:29.141 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33950, failed to submit 101615 00:34:29.141 success 0, unsuccessful 33950, failed 0 00:34:29.141 12:58:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:29.141 12:58:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:29.141 12:58:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:34:29.141 12:58:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:29.141 12:58:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:29.141 12:58:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:29.141 12:58:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:29.141 12:58:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:29.141 12:58:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:29.397 12:58:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:31.922 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:31.922 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:31.922 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:31.922 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:31.922 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:31.922 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:31.922 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:31.922 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:31.922 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:31.922 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:31.922 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:31.922 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:31.922 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:31.922 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:31.922 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:31.922 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:32.487 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:32.487 00:34:32.487 real 0m16.704s 00:34:32.487 user 0m8.627s 00:34:32.487 sys 0m4.559s 00:34:32.487 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:32.487 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:32.487 ************************************ 00:34:32.487 END TEST kernel_target_abort 00:34:32.487 ************************************ 00:34:32.487 12:58:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:32.487 12:58:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:32.487 12:58:14 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:32.487 12:58:14 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:34:32.487 12:58:14 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:32.487 12:58:14 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:34:32.487 12:58:14 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:32.487 12:58:14 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:32.487 rmmod nvme_tcp 00:34:32.748 rmmod nvme_fabrics 00:34:32.748 rmmod nvme_keyring 00:34:32.748 12:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:34:32.748 12:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:34:32.748 12:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:34:32.748 12:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2791772 ']' 00:34:32.748 12:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2791772 00:34:32.748 12:58:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2791772 ']' 00:34:32.748 12:58:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2791772 00:34:32.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2791772) - No such process 00:34:32.748 12:58:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2791772 is not found' 00:34:32.748 Process with pid 2791772 is not found 00:34:32.748 12:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:32.748 12:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:35.282 Waiting for block devices as requested 00:34:35.282 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:35.282 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:35.282 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:35.282 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:35.282 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:35.282 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:35.282 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:35.541 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:35.541 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:35.541 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:35.541 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:35.800 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:35.800 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:35.800 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:36.059 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:36.059 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:36.059 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:36.317 12:58:18 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:36.317 12:58:18 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:36.317 12:58:18 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:34:36.317 12:58:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:34:36.317 12:58:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:36.317 12:58:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:34:36.317 12:58:18 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:36.317 12:58:18 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:36.317 12:58:18 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:36.317 12:58:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:36.317 12:58:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:38.327 12:58:20 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:38.327 00:34:38.327 real 0m46.161s 00:34:38.327 user 1m5.816s 00:34:38.327 sys 0m15.064s 00:34:38.327 12:58:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:38.327 12:58:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:38.327 ************************************ 00:34:38.327 END TEST nvmf_abort_qd_sizes 00:34:38.327 ************************************ 00:34:38.327 12:58:20 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:38.327 12:58:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:38.327 12:58:20 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:34:38.327 12:58:20 -- common/autotest_common.sh@10 -- # set +x 00:34:38.327 ************************************ 00:34:38.327 START TEST keyring_file 00:34:38.327 ************************************ 00:34:38.327 12:58:20 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:38.327 * Looking for test storage... 00:34:38.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:38.327 12:58:20 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:38.327 12:58:20 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:34:38.327 12:58:20 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:38.625 12:58:20 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@345 -- # : 1 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:38.625 12:58:20 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@353 -- # local d=1 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@355 -- # echo 1 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@353 -- # local d=2 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@355 -- # echo 2 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@368 -- # return 0 00:34:38.625 12:58:20 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:38.625 12:58:20 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:38.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.625 --rc genhtml_branch_coverage=1 00:34:38.625 --rc genhtml_function_coverage=1 00:34:38.625 --rc genhtml_legend=1 00:34:38.625 --rc geninfo_all_blocks=1 00:34:38.625 --rc geninfo_unexecuted_blocks=1 00:34:38.625 00:34:38.625 ' 00:34:38.625 12:58:20 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:38.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.625 --rc genhtml_branch_coverage=1 00:34:38.625 --rc genhtml_function_coverage=1 00:34:38.625 --rc genhtml_legend=1 00:34:38.625 --rc geninfo_all_blocks=1 00:34:38.625 --rc 
geninfo_unexecuted_blocks=1 00:34:38.625 00:34:38.625 ' 00:34:38.625 12:58:20 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:38.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.625 --rc genhtml_branch_coverage=1 00:34:38.625 --rc genhtml_function_coverage=1 00:34:38.625 --rc genhtml_legend=1 00:34:38.625 --rc geninfo_all_blocks=1 00:34:38.625 --rc geninfo_unexecuted_blocks=1 00:34:38.625 00:34:38.625 ' 00:34:38.625 12:58:20 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:38.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.625 --rc genhtml_branch_coverage=1 00:34:38.625 --rc genhtml_function_coverage=1 00:34:38.625 --rc genhtml_legend=1 00:34:38.625 --rc geninfo_all_blocks=1 00:34:38.625 --rc geninfo_unexecuted_blocks=1 00:34:38.625 00:34:38.625 ' 00:34:38.625 12:58:20 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:38.625 12:58:20 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:38.625 12:58:20 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:34:38.625 12:58:20 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:38.625 12:58:20 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:38.625 12:58:20 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:38.625 12:58:20 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:38.625 12:58:20 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:38.625 12:58:20 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:38.625 12:58:20 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:38.625 12:58:20 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:38.625 12:58:20 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:38.625 12:58:20 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:38.625 12:58:20 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:38.625 12:58:20 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:38.625 12:58:20 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:38.625 12:58:20 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:38.625 12:58:20 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:38.625 12:58:20 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:38.625 12:58:20 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:38.625 12:58:20 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:38.625 12:58:20 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.626 12:58:20 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.626 12:58:20 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.626 12:58:20 keyring_file -- paths/export.sh@5 -- # export PATH 00:34:38.626 12:58:20 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.626 12:58:20 keyring_file -- nvmf/common.sh@51 -- # : 0 00:34:38.626 12:58:20 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:38.626 12:58:20 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:38.626 12:58:20 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:38.626 12:58:20 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:38.626 12:58:20 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:38.626 12:58:20 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:34:38.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:38.626 12:58:20 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:38.626 12:58:20 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:38.626 12:58:20 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:38.626 12:58:20 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:38.626 12:58:20 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:38.626 12:58:20 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:38.626 12:58:20 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:34:38.626 12:58:20 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:34:38.626 12:58:20 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:34:38.626 12:58:20 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:38.626 12:58:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:38.626 12:58:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:38.626 12:58:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:38.626 12:58:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:38.626 12:58:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:38.626 12:58:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.oaZx5GSOEN 00:34:38.626 12:58:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:38.626 12:58:20 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:38.626 12:58:20 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:38.626 12:58:20 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:38.626 12:58:20 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:34:38.626 12:58:20 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:38.626 12:58:20 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:38.626 12:58:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.oaZx5GSOEN 00:34:38.626 12:58:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.oaZx5GSOEN 00:34:38.626 12:58:20 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.oaZx5GSOEN 00:34:38.626 12:58:20 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:34:38.626 12:58:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:38.626 12:58:20 keyring_file -- keyring/common.sh@17 -- # name=key1 00:34:38.626 12:58:20 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:38.626 12:58:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:38.626 12:58:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:38.626 12:58:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.kP1b8eLkZV 00:34:38.626 12:58:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:38.626 12:58:20 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:38.626 12:58:20 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:38.626 12:58:20 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:38.626 12:58:20 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:34:38.626 12:58:20 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:38.626 12:58:20 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:38.626 12:58:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.kP1b8eLkZV 00:34:38.626 12:58:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.kP1b8eLkZV 00:34:38.626 12:58:21 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.kP1b8eLkZV 
00:34:38.626 12:58:21 keyring_file -- keyring/file.sh@30 -- # tgtpid=2800319 00:34:38.626 12:58:21 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:38.626 12:58:21 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2800319 00:34:38.626 12:58:21 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2800319 ']' 00:34:38.626 12:58:21 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:38.626 12:58:21 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:38.626 12:58:21 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:38.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:38.626 12:58:21 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:38.626 12:58:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:38.626 [2024-11-28 12:58:21.090945] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:34:38.626 [2024-11-28 12:58:21.091002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2800319 ] 00:34:38.884 [2024-11-28 12:58:21.153843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:38.884 [2024-11-28 12:58:21.196680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:39.143 12:58:21 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:39.143 [2024-11-28 12:58:21.408117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:39.143 null0 00:34:39.143 [2024-11-28 12:58:21.440172] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:39.143 [2024-11-28 12:58:21.440535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.143 12:58:21 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:39.143 [2024-11-28 12:58:21.468234] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:34:39.143 request: 00:34:39.143 { 00:34:39.143 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:34:39.143 "secure_channel": false, 00:34:39.143 "listen_address": { 00:34:39.143 "trtype": "tcp", 00:34:39.143 "traddr": "127.0.0.1", 00:34:39.143 "trsvcid": "4420" 00:34:39.143 }, 00:34:39.143 "method": "nvmf_subsystem_add_listener", 00:34:39.143 "req_id": 1 00:34:39.143 } 00:34:39.143 Got JSON-RPC error response 00:34:39.143 response: 00:34:39.143 { 00:34:39.143 "code": -32602, 00:34:39.143 "message": "Invalid parameters" 00:34:39.143 } 00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:39.143 12:58:21 keyring_file -- keyring/file.sh@47 -- # bperfpid=2800346 00:34:39.143 12:58:21 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2800346 /var/tmp/bperf.sock 00:34:39.143 12:58:21 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:34:39.143 12:58:21 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2800346 ']' 00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:39.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:39.143 12:58:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:39.143 [2024-11-28 12:58:21.519874] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 00:34:39.143 [2024-11-28 12:58:21.519918] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2800346 ] 00:34:39.143 [2024-11-28 12:58:21.580951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:39.143 [2024-11-28 12:58:21.624129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:39.401 12:58:21 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:39.401 12:58:21 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:39.401 12:58:21 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oaZx5GSOEN 00:34:39.401 12:58:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oaZx5GSOEN 00:34:39.401 12:58:21 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.kP1b8eLkZV 00:34:39.401 12:58:21 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.kP1b8eLkZV 00:34:39.659 12:58:22 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:34:39.659 12:58:22 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:34:39.659 12:58:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:39.659 12:58:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:39.659 12:58:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:39.917 12:58:22 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.oaZx5GSOEN == \/\t\m\p\/\t\m\p\.\o\a\Z\x\5\G\S\O\E\N ]] 00:34:39.917 12:58:22 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:34:39.917 12:58:22 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:34:39.917 12:58:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:39.917 12:58:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:39.917 12:58:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:40.175 12:58:22 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.kP1b8eLkZV == \/\t\m\p\/\t\m\p\.\k\P\1\b\8\e\L\k\Z\V ]] 00:34:40.175 12:58:22 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:34:40.175 12:58:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:40.175 12:58:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:40.175 12:58:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:40.175 12:58:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:40.175 12:58:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:34:40.175 12:58:22 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:34:40.175 12:58:22 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:34:40.175 12:58:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:40.175 12:58:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:40.175 12:58:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:40.175 12:58:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:40.175 12:58:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:40.432 12:58:22 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:34:40.432 12:58:22 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:40.432 12:58:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:40.689 [2024-11-28 12:58:23.027534] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:40.689 nvme0n1 00:34:40.689 12:58:23 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:34:40.689 12:58:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:40.689 12:58:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:40.689 12:58:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:40.689 12:58:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:40.689 12:58:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:34:40.947 12:58:23 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:34:40.947 12:58:23 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:34:40.947 12:58:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:40.947 12:58:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:40.947 12:58:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:40.947 12:58:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:40.947 12:58:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:41.205 12:58:23 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:34:41.205 12:58:23 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:41.205 Running I/O for 1 seconds... 00:34:42.140 17661.00 IOPS, 68.99 MiB/s 00:34:42.140 Latency(us) 00:34:42.140 [2024-11-28T11:58:24.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:42.140 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:34:42.140 nvme0n1 : 1.00 17713.46 69.19 0.00 0.00 7213.52 2592.95 12765.27 00:34:42.140 [2024-11-28T11:58:24.659Z] =================================================================================================================== 00:34:42.140 [2024-11-28T11:58:24.659Z] Total : 17713.46 69.19 0.00 0.00 7213.52 2592.95 12765.27 00:34:42.140 { 00:34:42.140 "results": [ 00:34:42.140 { 00:34:42.140 "job": "nvme0n1", 00:34:42.140 "core_mask": "0x2", 00:34:42.140 "workload": "randrw", 00:34:42.140 "percentage": 50, 00:34:42.140 "status": "finished", 00:34:42.140 "queue_depth": 128, 00:34:42.140 "io_size": 4096, 00:34:42.140 "runtime": 1.004321, 00:34:42.140 "iops": 17713.460138740502, 00:34:42.140 "mibps": 69.19320366695509, 
00:34:42.140 "io_failed": 0, 00:34:42.140 "io_timeout": 0, 00:34:42.140 "avg_latency_us": 7213.517984994012, 00:34:42.140 "min_latency_us": 2592.946086956522, 00:34:42.140 "max_latency_us": 12765.27304347826 00:34:42.140 } 00:34:42.140 ], 00:34:42.140 "core_count": 1 00:34:42.140 } 00:34:42.140 12:58:24 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:42.140 12:58:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:42.398 12:58:24 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:34:42.398 12:58:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:42.398 12:58:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:42.398 12:58:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:42.398 12:58:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:42.398 12:58:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:42.656 12:58:25 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:34:42.656 12:58:25 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:34:42.656 12:58:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:42.656 12:58:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:42.656 12:58:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:42.656 12:58:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:42.656 12:58:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:42.914 12:58:25 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:34:42.914 12:58:25 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:42.914 12:58:25 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:42.914 12:58:25 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:42.914 12:58:25 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:42.914 12:58:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:42.914 12:58:25 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:42.914 12:58:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:42.914 12:58:25 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:42.914 12:58:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:42.914 [2024-11-28 12:58:25.392563] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:42.914 [2024-11-28 12:58:25.393111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f1210 (107): Transport endpoint is not connected 00:34:42.914 [2024-11-28 12:58:25.394105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f1210 (9): Bad file descriptor 00:34:42.914 [2024-11-28 12:58:25.395107] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:34:42.914 [2024-11-28 12:58:25.395117] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:42.914 [2024-11-28 12:58:25.395125] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:34:42.914 [2024-11-28 12:58:25.395133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:34:42.914 request: 00:34:42.914 { 00:34:42.914 "name": "nvme0", 00:34:42.914 "trtype": "tcp", 00:34:42.914 "traddr": "127.0.0.1", 00:34:42.914 "adrfam": "ipv4", 00:34:42.914 "trsvcid": "4420", 00:34:42.914 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:42.914 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:42.914 "prchk_reftag": false, 00:34:42.914 "prchk_guard": false, 00:34:42.914 "hdgst": false, 00:34:42.914 "ddgst": false, 00:34:42.914 "psk": "key1", 00:34:42.914 "allow_unrecognized_csi": false, 00:34:42.914 "method": "bdev_nvme_attach_controller", 00:34:42.914 "req_id": 1 00:34:42.914 } 00:34:42.914 Got JSON-RPC error response 00:34:42.914 response: 00:34:42.914 { 00:34:42.914 "code": -5, 00:34:42.914 "message": "Input/output error" 00:34:42.914 } 00:34:42.914 12:58:25 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:42.914 12:58:25 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:42.914 12:58:25 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:42.914 12:58:25 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:42.914 12:58:25 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:34:42.914 12:58:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:42.914 12:58:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:42.914 12:58:25 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:34:42.914 12:58:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:42.914 12:58:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:43.173 12:58:25 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:34:43.173 12:58:25 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:34:43.173 12:58:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:43.173 12:58:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:43.173 12:58:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:43.173 12:58:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:43.173 12:58:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:43.432 12:58:25 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:34:43.432 12:58:25 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:34:43.432 12:58:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:43.691 12:58:25 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:34:43.691 12:58:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:34:43.691 12:58:26 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:34:43.691 12:58:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:43.691 12:58:26 keyring_file -- keyring/file.sh@78 -- # jq length 00:34:43.950 12:58:26 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:34:43.950 12:58:26 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.oaZx5GSOEN 00:34:43.950 12:58:26 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.oaZx5GSOEN 00:34:43.950 12:58:26 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:43.950 12:58:26 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.oaZx5GSOEN 00:34:43.950 12:58:26 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:43.950 12:58:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:43.950 12:58:26 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:43.950 12:58:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:43.950 12:58:26 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oaZx5GSOEN 00:34:43.950 12:58:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oaZx5GSOEN 00:34:44.208 [2024-11-28 12:58:26.551151] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.oaZx5GSOEN': 0100660 00:34:44.208 [2024-11-28 12:58:26.551177] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:34:44.208 request: 00:34:44.208 { 00:34:44.208 "name": "key0", 00:34:44.208 "path": "/tmp/tmp.oaZx5GSOEN", 00:34:44.208 "method": "keyring_file_add_key", 00:34:44.208 "req_id": 1 00:34:44.208 } 00:34:44.208 Got JSON-RPC error response 00:34:44.208 response: 00:34:44.208 { 00:34:44.208 "code": -1, 00:34:44.208 "message": "Operation not permitted" 00:34:44.208 } 00:34:44.208 12:58:26 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:44.208 12:58:26 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:44.208 12:58:26 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:44.208 12:58:26 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:44.208 12:58:26 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.oaZx5GSOEN 00:34:44.208 12:58:26 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oaZx5GSOEN 00:34:44.208 12:58:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oaZx5GSOEN 00:34:44.466 12:58:26 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.oaZx5GSOEN 00:34:44.467 12:58:26 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:34:44.467 12:58:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:44.467 12:58:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:44.467 12:58:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:44.467 12:58:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:44.467 12:58:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:44.467 12:58:26 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:34:44.467 12:58:26 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:44.467 12:58:26 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:44.467 12:58:26 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:44.467 12:58:26 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:44.467 12:58:26 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:44.467 12:58:26 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:44.467 12:58:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:44.467 12:58:26 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:44.467 12:58:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:44.725 [2024-11-28 12:58:27.124685] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.oaZx5GSOEN': No such file or directory 00:34:44.725 [2024-11-28 12:58:27.124711] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:34:44.725 [2024-11-28 12:58:27.124726] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:34:44.725 [2024-11-28 12:58:27.124733] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:34:44.725 [2024-11-28 12:58:27.124757] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:44.725 [2024-11-28 12:58:27.124763] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:34:44.725 request: 00:34:44.725 { 00:34:44.725 "name": "nvme0", 00:34:44.725 "trtype": "tcp", 00:34:44.725 "traddr": "127.0.0.1", 00:34:44.725 "adrfam": "ipv4", 00:34:44.725 "trsvcid": "4420", 00:34:44.725 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:44.725 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:34:44.725 "prchk_reftag": false, 00:34:44.725 "prchk_guard": false, 00:34:44.725 "hdgst": false, 00:34:44.725 "ddgst": false, 00:34:44.725 "psk": "key0", 00:34:44.725 "allow_unrecognized_csi": false, 00:34:44.725 "method": "bdev_nvme_attach_controller", 00:34:44.725 "req_id": 1 00:34:44.725 } 00:34:44.725 Got JSON-RPC error response 00:34:44.725 response: 00:34:44.725 { 00:34:44.725 "code": -19, 00:34:44.725 "message": "No such device" 00:34:44.725 } 00:34:44.725 12:58:27 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:44.725 12:58:27 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:44.725 12:58:27 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:44.725 12:58:27 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:44.725 12:58:27 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:34:44.725 12:58:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:44.984 12:58:27 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:44.984 12:58:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:44.984 12:58:27 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:44.984 12:58:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:44.984 12:58:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:44.984 12:58:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:44.984 12:58:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.d9bV7nXjCy 00:34:44.984 12:58:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:44.984 12:58:27 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:44.984 12:58:27 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:34:44.984 12:58:27 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:44.984 12:58:27 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:44.984 12:58:27 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:44.984 12:58:27 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:44.984 12:58:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.d9bV7nXjCy 00:34:44.984 12:58:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.d9bV7nXjCy 00:34:44.984 12:58:27 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.d9bV7nXjCy 00:34:44.984 12:58:27 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.d9bV7nXjCy 00:34:44.984 12:58:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.d9bV7nXjCy 00:34:45.243 12:58:27 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:45.243 12:58:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:45.502 nvme0n1 00:34:45.502 12:58:27 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:34:45.502 12:58:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:45.502 12:58:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:45.502 12:58:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:45.502 12:58:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:45.502 12:58:27 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:45.761 12:58:28 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:34:45.761 12:58:28 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:34:45.761 12:58:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:45.761 12:58:28 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:34:45.761 12:58:28 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:34:45.761 12:58:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:45.761 12:58:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:45.761 12:58:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:46.020 12:58:28 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:34:46.020 12:58:28 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:34:46.020 12:58:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:46.020 12:58:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:46.020 12:58:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:46.020 12:58:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:46.020 12:58:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:46.279 12:58:28 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:34:46.279 12:58:28 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:46.279 12:58:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:34:46.539 12:58:28 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:34:46.539 12:58:28 keyring_file -- keyring/file.sh@105 -- # jq length 00:34:46.539 12:58:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:46.539 12:58:29 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:34:46.539 12:58:29 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.d9bV7nXjCy 00:34:46.539 12:58:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.d9bV7nXjCy 00:34:46.798 12:58:29 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.kP1b8eLkZV 00:34:46.798 12:58:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.kP1b8eLkZV 00:34:47.056 12:58:29 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:47.057 12:58:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:47.316 nvme0n1 00:34:47.316 12:58:29 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:34:47.316 12:58:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:34:47.575 12:58:29 keyring_file -- keyring/file.sh@113 -- # config='{ 00:34:47.575 "subsystems": [ 00:34:47.575 { 00:34:47.575 "subsystem": 
"keyring", 00:34:47.575 "config": [ 00:34:47.575 { 00:34:47.575 "method": "keyring_file_add_key", 00:34:47.575 "params": { 00:34:47.575 "name": "key0", 00:34:47.575 "path": "/tmp/tmp.d9bV7nXjCy" 00:34:47.575 } 00:34:47.575 }, 00:34:47.575 { 00:34:47.575 "method": "keyring_file_add_key", 00:34:47.575 "params": { 00:34:47.575 "name": "key1", 00:34:47.575 "path": "/tmp/tmp.kP1b8eLkZV" 00:34:47.575 } 00:34:47.575 } 00:34:47.575 ] 00:34:47.575 }, 00:34:47.575 { 00:34:47.575 "subsystem": "iobuf", 00:34:47.575 "config": [ 00:34:47.575 { 00:34:47.575 "method": "iobuf_set_options", 00:34:47.575 "params": { 00:34:47.575 "small_pool_count": 8192, 00:34:47.575 "large_pool_count": 1024, 00:34:47.575 "small_bufsize": 8192, 00:34:47.575 "large_bufsize": 135168, 00:34:47.575 "enable_numa": false 00:34:47.575 } 00:34:47.575 } 00:34:47.575 ] 00:34:47.575 }, 00:34:47.575 { 00:34:47.575 "subsystem": "sock", 00:34:47.575 "config": [ 00:34:47.575 { 00:34:47.575 "method": "sock_set_default_impl", 00:34:47.575 "params": { 00:34:47.575 "impl_name": "posix" 00:34:47.575 } 00:34:47.575 }, 00:34:47.575 { 00:34:47.575 "method": "sock_impl_set_options", 00:34:47.575 "params": { 00:34:47.575 "impl_name": "ssl", 00:34:47.575 "recv_buf_size": 4096, 00:34:47.575 "send_buf_size": 4096, 00:34:47.575 "enable_recv_pipe": true, 00:34:47.575 "enable_quickack": false, 00:34:47.575 "enable_placement_id": 0, 00:34:47.575 "enable_zerocopy_send_server": true, 00:34:47.575 "enable_zerocopy_send_client": false, 00:34:47.575 "zerocopy_threshold": 0, 00:34:47.575 "tls_version": 0, 00:34:47.575 "enable_ktls": false 00:34:47.575 } 00:34:47.575 }, 00:34:47.575 { 00:34:47.576 "method": "sock_impl_set_options", 00:34:47.576 "params": { 00:34:47.576 "impl_name": "posix", 00:34:47.576 "recv_buf_size": 2097152, 00:34:47.576 "send_buf_size": 2097152, 00:34:47.576 "enable_recv_pipe": true, 00:34:47.576 "enable_quickack": false, 00:34:47.576 "enable_placement_id": 0, 00:34:47.576 "enable_zerocopy_send_server": true, 
00:34:47.576 "enable_zerocopy_send_client": false, 00:34:47.576 "zerocopy_threshold": 0, 00:34:47.576 "tls_version": 0, 00:34:47.576 "enable_ktls": false 00:34:47.576 } 00:34:47.576 } 00:34:47.576 ] 00:34:47.576 }, 00:34:47.576 { 00:34:47.576 "subsystem": "vmd", 00:34:47.576 "config": [] 00:34:47.576 }, 00:34:47.576 { 00:34:47.576 "subsystem": "accel", 00:34:47.576 "config": [ 00:34:47.576 { 00:34:47.576 "method": "accel_set_options", 00:34:47.576 "params": { 00:34:47.576 "small_cache_size": 128, 00:34:47.576 "large_cache_size": 16, 00:34:47.576 "task_count": 2048, 00:34:47.576 "sequence_count": 2048, 00:34:47.576 "buf_count": 2048 00:34:47.576 } 00:34:47.576 } 00:34:47.576 ] 00:34:47.576 }, 00:34:47.576 { 00:34:47.576 "subsystem": "bdev", 00:34:47.576 "config": [ 00:34:47.576 { 00:34:47.576 "method": "bdev_set_options", 00:34:47.576 "params": { 00:34:47.576 "bdev_io_pool_size": 65535, 00:34:47.576 "bdev_io_cache_size": 256, 00:34:47.576 "bdev_auto_examine": true, 00:34:47.576 "iobuf_small_cache_size": 128, 00:34:47.576 "iobuf_large_cache_size": 16 00:34:47.576 } 00:34:47.576 }, 00:34:47.576 { 00:34:47.576 "method": "bdev_raid_set_options", 00:34:47.576 "params": { 00:34:47.576 "process_window_size_kb": 1024, 00:34:47.576 "process_max_bandwidth_mb_sec": 0 00:34:47.576 } 00:34:47.576 }, 00:34:47.576 { 00:34:47.576 "method": "bdev_iscsi_set_options", 00:34:47.576 "params": { 00:34:47.576 "timeout_sec": 30 00:34:47.576 } 00:34:47.576 }, 00:34:47.576 { 00:34:47.576 "method": "bdev_nvme_set_options", 00:34:47.576 "params": { 00:34:47.576 "action_on_timeout": "none", 00:34:47.576 "timeout_us": 0, 00:34:47.576 "timeout_admin_us": 0, 00:34:47.576 "keep_alive_timeout_ms": 10000, 00:34:47.576 "arbitration_burst": 0, 00:34:47.576 "low_priority_weight": 0, 00:34:47.576 "medium_priority_weight": 0, 00:34:47.576 "high_priority_weight": 0, 00:34:47.576 "nvme_adminq_poll_period_us": 10000, 00:34:47.576 "nvme_ioq_poll_period_us": 0, 00:34:47.576 "io_queue_requests": 512, 
00:34:47.576 "delay_cmd_submit": true, 00:34:47.576 "transport_retry_count": 4, 00:34:47.576 "bdev_retry_count": 3, 00:34:47.576 "transport_ack_timeout": 0, 00:34:47.576 "ctrlr_loss_timeout_sec": 0, 00:34:47.576 "reconnect_delay_sec": 0, 00:34:47.576 "fast_io_fail_timeout_sec": 0, 00:34:47.576 "disable_auto_failback": false, 00:34:47.576 "generate_uuids": false, 00:34:47.576 "transport_tos": 0, 00:34:47.576 "nvme_error_stat": false, 00:34:47.576 "rdma_srq_size": 0, 00:34:47.576 "io_path_stat": false, 00:34:47.576 "allow_accel_sequence": false, 00:34:47.576 "rdma_max_cq_size": 0, 00:34:47.576 "rdma_cm_event_timeout_ms": 0, 00:34:47.576 "dhchap_digests": [ 00:34:47.576 "sha256", 00:34:47.576 "sha384", 00:34:47.576 "sha512" 00:34:47.576 ], 00:34:47.576 "dhchap_dhgroups": [ 00:34:47.576 "null", 00:34:47.576 "ffdhe2048", 00:34:47.576 "ffdhe3072", 00:34:47.576 "ffdhe4096", 00:34:47.576 "ffdhe6144", 00:34:47.576 "ffdhe8192" 00:34:47.576 ] 00:34:47.576 } 00:34:47.576 }, 00:34:47.576 { 00:34:47.576 "method": "bdev_nvme_attach_controller", 00:34:47.576 "params": { 00:34:47.576 "name": "nvme0", 00:34:47.576 "trtype": "TCP", 00:34:47.576 "adrfam": "IPv4", 00:34:47.576 "traddr": "127.0.0.1", 00:34:47.576 "trsvcid": "4420", 00:34:47.576 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:47.576 "prchk_reftag": false, 00:34:47.576 "prchk_guard": false, 00:34:47.576 "ctrlr_loss_timeout_sec": 0, 00:34:47.576 "reconnect_delay_sec": 0, 00:34:47.576 "fast_io_fail_timeout_sec": 0, 00:34:47.576 "psk": "key0", 00:34:47.576 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:47.576 "hdgst": false, 00:34:47.576 "ddgst": false, 00:34:47.576 "multipath": "multipath" 00:34:47.576 } 00:34:47.576 }, 00:34:47.576 { 00:34:47.576 "method": "bdev_nvme_set_hotplug", 00:34:47.576 "params": { 00:34:47.576 "period_us": 100000, 00:34:47.576 "enable": false 00:34:47.576 } 00:34:47.576 }, 00:34:47.576 { 00:34:47.576 "method": "bdev_wait_for_examine" 00:34:47.576 } 00:34:47.576 ] 00:34:47.576 }, 00:34:47.576 { 
00:34:47.576 "subsystem": "nbd", 00:34:47.576 "config": [] 00:34:47.576 } 00:34:47.576 ] 00:34:47.576 }' 00:34:47.576 12:58:29 keyring_file -- keyring/file.sh@115 -- # killprocess 2800346 00:34:47.576 12:58:29 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2800346 ']' 00:34:47.576 12:58:29 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2800346 00:34:47.576 12:58:29 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:47.576 12:58:29 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:47.576 12:58:29 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2800346 00:34:47.576 12:58:29 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:47.576 12:58:29 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:47.576 12:58:29 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2800346' 00:34:47.576 killing process with pid 2800346 00:34:47.576 12:58:29 keyring_file -- common/autotest_common.sh@973 -- # kill 2800346 00:34:47.576 Received shutdown signal, test time was about 1.000000 seconds 00:34:47.576 00:34:47.576 Latency(us) 00:34:47.576 [2024-11-28T11:58:30.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:47.576 [2024-11-28T11:58:30.095Z] =================================================================================================================== 00:34:47.576 [2024-11-28T11:58:30.095Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:47.576 12:58:29 keyring_file -- common/autotest_common.sh@978 -- # wait 2800346 00:34:47.836 12:58:30 keyring_file -- keyring/file.sh@118 -- # bperfpid=2801856 00:34:47.836 12:58:30 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2801856 /var/tmp/bperf.sock 00:34:47.836 12:58:30 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2801856 ']' 00:34:47.836 12:58:30 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:34:47.836 12:58:30 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:34:47.836 12:58:30 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:47.836 12:58:30 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:47.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:47.836 12:58:30 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:34:47.836 "subsystems": [ 00:34:47.836 { 00:34:47.836 "subsystem": "keyring", 00:34:47.836 "config": [ 00:34:47.836 { 00:34:47.836 "method": "keyring_file_add_key", 00:34:47.836 "params": { 00:34:47.836 "name": "key0", 00:34:47.836 "path": "/tmp/tmp.d9bV7nXjCy" 00:34:47.836 } 00:34:47.836 }, 00:34:47.836 { 00:34:47.836 "method": "keyring_file_add_key", 00:34:47.836 "params": { 00:34:47.836 "name": "key1", 00:34:47.836 "path": "/tmp/tmp.kP1b8eLkZV" 00:34:47.836 } 00:34:47.836 } 00:34:47.836 ] 00:34:47.836 }, 00:34:47.836 { 00:34:47.836 "subsystem": "iobuf", 00:34:47.836 "config": [ 00:34:47.836 { 00:34:47.836 "method": "iobuf_set_options", 00:34:47.836 "params": { 00:34:47.836 "small_pool_count": 8192, 00:34:47.836 "large_pool_count": 1024, 00:34:47.836 "small_bufsize": 8192, 00:34:47.836 "large_bufsize": 135168, 00:34:47.836 "enable_numa": false 00:34:47.836 } 00:34:47.836 } 00:34:47.836 ] 00:34:47.836 }, 00:34:47.836 { 00:34:47.836 "subsystem": "sock", 00:34:47.836 "config": [ 00:34:47.836 { 00:34:47.836 "method": "sock_set_default_impl", 00:34:47.836 "params": { 00:34:47.836 "impl_name": "posix" 00:34:47.836 } 00:34:47.836 }, 00:34:47.836 { 00:34:47.836 "method": "sock_impl_set_options", 00:34:47.836 "params": { 00:34:47.836 "impl_name": "ssl", 00:34:47.836 "recv_buf_size": 4096, 00:34:47.836 
"send_buf_size": 4096, 00:34:47.836 "enable_recv_pipe": true, 00:34:47.836 "enable_quickack": false, 00:34:47.836 "enable_placement_id": 0, 00:34:47.836 "enable_zerocopy_send_server": true, 00:34:47.836 "enable_zerocopy_send_client": false, 00:34:47.836 "zerocopy_threshold": 0, 00:34:47.836 "tls_version": 0, 00:34:47.836 "enable_ktls": false 00:34:47.836 } 00:34:47.836 }, 00:34:47.836 { 00:34:47.836 "method": "sock_impl_set_options", 00:34:47.836 "params": { 00:34:47.836 "impl_name": "posix", 00:34:47.836 "recv_buf_size": 2097152, 00:34:47.836 "send_buf_size": 2097152, 00:34:47.836 "enable_recv_pipe": true, 00:34:47.836 "enable_quickack": false, 00:34:47.836 "enable_placement_id": 0, 00:34:47.836 "enable_zerocopy_send_server": true, 00:34:47.836 "enable_zerocopy_send_client": false, 00:34:47.836 "zerocopy_threshold": 0, 00:34:47.836 "tls_version": 0, 00:34:47.836 "enable_ktls": false 00:34:47.836 } 00:34:47.836 } 00:34:47.836 ] 00:34:47.836 }, 00:34:47.836 { 00:34:47.836 "subsystem": "vmd", 00:34:47.836 "config": [] 00:34:47.836 }, 00:34:47.836 { 00:34:47.836 "subsystem": "accel", 00:34:47.836 "config": [ 00:34:47.836 { 00:34:47.836 "method": "accel_set_options", 00:34:47.836 "params": { 00:34:47.836 "small_cache_size": 128, 00:34:47.836 "large_cache_size": 16, 00:34:47.836 "task_count": 2048, 00:34:47.836 "sequence_count": 2048, 00:34:47.836 "buf_count": 2048 00:34:47.836 } 00:34:47.836 } 00:34:47.836 ] 00:34:47.836 }, 00:34:47.836 { 00:34:47.836 "subsystem": "bdev", 00:34:47.836 "config": [ 00:34:47.836 { 00:34:47.836 "method": "bdev_set_options", 00:34:47.836 "params": { 00:34:47.836 "bdev_io_pool_size": 65535, 00:34:47.836 "bdev_io_cache_size": 256, 00:34:47.836 "bdev_auto_examine": true, 00:34:47.836 "iobuf_small_cache_size": 128, 00:34:47.836 "iobuf_large_cache_size": 16 00:34:47.836 } 00:34:47.836 }, 00:34:47.836 { 00:34:47.836 "method": "bdev_raid_set_options", 00:34:47.836 "params": { 00:34:47.836 "process_window_size_kb": 1024, 00:34:47.836 
"process_max_bandwidth_mb_sec": 0 00:34:47.836 } 00:34:47.836 }, 00:34:47.836 { 00:34:47.836 "method": "bdev_iscsi_set_options", 00:34:47.836 "params": { 00:34:47.836 "timeout_sec": 30 00:34:47.836 } 00:34:47.836 }, 00:34:47.836 { 00:34:47.836 "method": "bdev_nvme_set_options", 00:34:47.836 "params": { 00:34:47.836 "action_on_timeout": "none", 00:34:47.836 "timeout_us": 0, 00:34:47.836 "timeout_admin_us": 0, 00:34:47.836 "keep_alive_timeout_ms": 10000, 00:34:47.836 "arbitration_burst": 0, 00:34:47.836 "low_priority_weight": 0, 00:34:47.836 "medium_priority_weight": 0, 00:34:47.836 "high_priority_weight": 0, 00:34:47.836 "nvme_adminq_poll_period_us": 10000, 00:34:47.836 "nvme_ioq_poll_period_us": 0, 00:34:47.836 "io_queue_requests": 512, 00:34:47.836 "delay_cmd_submit": true, 00:34:47.836 "transport_retry_count": 4, 00:34:47.836 "bdev_retry_count": 3, 00:34:47.836 "transport_ack_timeout": 0, 00:34:47.836 "ctrlr_loss_timeout_sec": 0, 00:34:47.836 "reconnect_delay_sec": 0, 00:34:47.836 "fast_io_fail_timeout_sec": 0, 00:34:47.836 "disable_auto_failback": false, 00:34:47.836 "generate_uuids": false, 00:34:47.836 "transport_tos": 0, 00:34:47.836 "nvme_error_stat": false, 00:34:47.836 "rdma_srq_size": 0, 00:34:47.836 "io_path_stat": false, 00:34:47.836 "allow_accel_sequence": false, 00:34:47.836 "rdma_max_cq_size": 0, 00:34:47.836 "rdma_cm_event_timeout_ms": 0, 00:34:47.836 "dhchap_digests": [ 00:34:47.836 "sha256", 00:34:47.836 "sha384", 00:34:47.836 "sha512" 00:34:47.836 ], 00:34:47.836 "dhchap_dhgroups": [ 00:34:47.836 "null", 00:34:47.836 "ffdhe2048", 00:34:47.836 "ffdhe3072", 00:34:47.836 "ffdhe4096", 00:34:47.836 "ffdhe6144", 00:34:47.837 "ffdhe8192" 00:34:47.837 ] 00:34:47.837 } 00:34:47.837 }, 00:34:47.837 { 00:34:47.837 "method": "bdev_nvme_attach_controller", 00:34:47.837 "params": { 00:34:47.837 "name": "nvme0", 00:34:47.837 "trtype": "TCP", 00:34:47.837 "adrfam": "IPv4", 00:34:47.837 "traddr": "127.0.0.1", 00:34:47.837 "trsvcid": "4420", 00:34:47.837 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:34:47.837 "prchk_reftag": false, 00:34:47.837 "prchk_guard": false, 00:34:47.837 "ctrlr_loss_timeout_sec": 0, 00:34:47.837 "reconnect_delay_sec": 0, 00:34:47.837 "fast_io_fail_timeout_sec": 0, 00:34:47.837 "psk": "key0", 00:34:47.837 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:47.837 "hdgst": false, 00:34:47.837 "ddgst": false, 00:34:47.837 "multipath": "multipath" 00:34:47.837 } 00:34:47.837 }, 00:34:47.837 { 00:34:47.837 "method": "bdev_nvme_set_hotplug", 00:34:47.837 "params": { 00:34:47.837 "period_us": 100000, 00:34:47.837 "enable": false 00:34:47.837 } 00:34:47.837 }, 00:34:47.837 { 00:34:47.837 "method": "bdev_wait_for_examine" 00:34:47.837 } 00:34:47.837 ] 00:34:47.837 }, 00:34:47.837 { 00:34:47.837 "subsystem": "nbd", 00:34:47.837 "config": [] 00:34:47.837 } 00:34:47.837 ] 00:34:47.837 }' 00:34:47.837 12:58:30 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:47.837 12:58:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:47.837 [2024-11-28 12:58:30.145202] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:34:47.837 [2024-11-28 12:58:30.145253] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2801856 ] 00:34:47.837 [2024-11-28 12:58:30.206746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.837 [2024-11-28 12:58:30.246742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:48.095 [2024-11-28 12:58:30.409892] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:48.662 12:58:30 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:48.662 12:58:30 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:48.662 12:58:30 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:34:48.662 12:58:30 keyring_file -- keyring/file.sh@121 -- # jq length 00:34:48.662 12:58:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:48.662 12:58:31 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:34:48.921 12:58:31 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:34:48.921 12:58:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:48.921 12:58:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:48.921 12:58:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:48.921 12:58:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:48.921 12:58:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:48.921 12:58:31 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:34:48.921 12:58:31 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:34:48.921 12:58:31 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:48.921 12:58:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:48.921 12:58:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:48.921 12:58:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:48.921 12:58:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:49.180 12:58:31 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:34:49.180 12:58:31 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:34:49.180 12:58:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:34:49.180 12:58:31 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:34:49.440 12:58:31 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:34:49.440 12:58:31 keyring_file -- keyring/file.sh@1 -- # cleanup 00:34:49.440 12:58:31 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.d9bV7nXjCy /tmp/tmp.kP1b8eLkZV 00:34:49.440 12:58:31 keyring_file -- keyring/file.sh@20 -- # killprocess 2801856 00:34:49.440 12:58:31 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2801856 ']' 00:34:49.440 12:58:31 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2801856 00:34:49.440 12:58:31 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:49.440 12:58:31 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:49.440 12:58:31 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2801856 00:34:49.440 12:58:31 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:49.440 12:58:31 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:49.440 12:58:31 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2801856' 00:34:49.440 killing process with pid 2801856 00:34:49.440 12:58:31 keyring_file -- common/autotest_common.sh@973 -- # kill 2801856 00:34:49.440 Received shutdown signal, test time was about 1.000000 seconds 00:34:49.440 00:34:49.440 Latency(us) 00:34:49.440 [2024-11-28T11:58:31.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:49.440 [2024-11-28T11:58:31.959Z] =================================================================================================================== 00:34:49.440 [2024-11-28T11:58:31.959Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:49.440 12:58:31 keyring_file -- common/autotest_common.sh@978 -- # wait 2801856 00:34:49.699 12:58:31 keyring_file -- keyring/file.sh@21 -- # killprocess 2800319 00:34:49.699 12:58:31 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2800319 ']' 00:34:49.699 12:58:31 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2800319 00:34:49.699 12:58:31 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:49.699 12:58:31 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:49.699 12:58:31 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2800319 00:34:49.699 12:58:32 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:49.699 12:58:32 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:49.699 12:58:32 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2800319' 00:34:49.699 killing process with pid 2800319 00:34:49.699 12:58:32 keyring_file -- common/autotest_common.sh@973 -- # kill 2800319 00:34:49.699 12:58:32 keyring_file -- common/autotest_common.sh@978 -- # wait 2800319 00:34:49.970 00:34:49.971 real 0m11.594s 00:34:49.971 user 0m28.790s 00:34:49.971 sys 0m2.597s 00:34:49.971 12:58:32 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:34:49.971 12:58:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:49.971 ************************************ 00:34:49.971 END TEST keyring_file 00:34:49.971 ************************************ 00:34:49.971 12:58:32 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:34:49.971 12:58:32 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:49.971 12:58:32 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:49.971 12:58:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:49.971 12:58:32 -- common/autotest_common.sh@10 -- # set +x 00:34:49.971 ************************************ 00:34:49.971 START TEST keyring_linux 00:34:49.971 ************************************ 00:34:49.971 12:58:32 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:49.971 Joined session keyring: 598776110 00:34:49.971 * Looking for test storage... 
00:34:50.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:50.234 12:58:32 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:50.234 12:58:32 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:34:50.234 12:58:32 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:50.234 12:58:32 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@345 -- # : 1 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:50.234 12:58:32 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:50.235 12:58:32 keyring_linux -- scripts/common.sh@368 -- # return 0 00:34:50.235 12:58:32 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:50.235 12:58:32 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:50.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:50.235 --rc genhtml_branch_coverage=1 00:34:50.235 --rc genhtml_function_coverage=1 00:34:50.235 --rc genhtml_legend=1 00:34:50.235 --rc geninfo_all_blocks=1 00:34:50.235 --rc geninfo_unexecuted_blocks=1 00:34:50.235 00:34:50.235 ' 00:34:50.235 12:58:32 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:50.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:50.235 --rc genhtml_branch_coverage=1 00:34:50.235 --rc genhtml_function_coverage=1 00:34:50.235 --rc genhtml_legend=1 00:34:50.235 --rc geninfo_all_blocks=1 00:34:50.235 --rc geninfo_unexecuted_blocks=1 00:34:50.235 00:34:50.235 ' 
00:34:50.235 12:58:32 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:50.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:50.235 --rc genhtml_branch_coverage=1 00:34:50.235 --rc genhtml_function_coverage=1 00:34:50.235 --rc genhtml_legend=1 00:34:50.235 --rc geninfo_all_blocks=1 00:34:50.235 --rc geninfo_unexecuted_blocks=1 00:34:50.235 00:34:50.235 ' 00:34:50.235 12:58:32 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:50.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:50.235 --rc genhtml_branch_coverage=1 00:34:50.235 --rc genhtml_function_coverage=1 00:34:50.235 --rc genhtml_legend=1 00:34:50.235 --rc geninfo_all_blocks=1 00:34:50.235 --rc geninfo_unexecuted_blocks=1 00:34:50.235 00:34:50.235 ' 00:34:50.235 12:58:32 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:50.235 12:58:32 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:50.235 12:58:32 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:34:50.235 12:58:32 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:50.235 12:58:32 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:50.235 12:58:32 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:50.235 12:58:32 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.235 12:58:32 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.235 12:58:32 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.235 12:58:32 keyring_linux -- paths/export.sh@5 -- # export PATH 00:34:50.235 12:58:32 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:34:50.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:50.235 12:58:32 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:50.235 12:58:32 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:50.235 12:58:32 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:50.235 12:58:32 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:34:50.235 12:58:32 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:34:50.235 12:58:32 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:34:50.235 12:58:32 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:34:50.235 12:58:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:50.235 12:58:32 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:34:50.235 12:58:32 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:50.235 12:58:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:50.235 12:58:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:34:50.235 12:58:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@733 -- # python - 00:34:50.235 12:58:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:34:50.235 12:58:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:34:50.235 /tmp/:spdk-test:key0 00:34:50.235 12:58:32 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:34:50.235 12:58:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:50.235 12:58:32 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:34:50.235 12:58:32 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:50.235 12:58:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:50.235 12:58:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:34:50.235 12:58:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:34:50.235 12:58:32 keyring_linux -- nvmf/common.sh@733 -- # python - 00:34:50.235 12:58:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:34:50.236 12:58:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:34:50.236 /tmp/:spdk-test:key1 00:34:50.236 12:58:32 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:50.236 
12:58:32 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2802403 00:34:50.236 12:58:32 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2802403 00:34:50.236 12:58:32 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2802403 ']' 00:34:50.236 12:58:32 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:50.236 12:58:32 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:50.236 12:58:32 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:50.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:50.236 12:58:32 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:50.236 12:58:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:50.236 [2024-11-28 12:58:32.717207] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:34:50.236 [2024-11-28 12:58:32.717259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2802403 ] 00:34:50.494 [2024-11-28 12:58:32.780284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.494 [2024-11-28 12:58:32.823728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:50.753 12:58:33 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:50.753 12:58:33 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:34:50.753 12:58:33 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:34:50.753 12:58:33 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.753 12:58:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:50.753 [2024-11-28 12:58:33.029839] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:50.753 null0 00:34:50.753 [2024-11-28 12:58:33.061892] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:50.753 [2024-11-28 12:58:33.062261] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:50.753 12:58:33 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.753 12:58:33 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:34:50.753 899557456 00:34:50.753 12:58:33 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:34:50.753 1062575159 00:34:50.753 12:58:33 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:34:50.753 
12:58:33 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2802421 00:34:50.753 12:58:33 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2802421 /var/tmp/bperf.sock 00:34:50.753 12:58:33 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2802421 ']' 00:34:50.753 12:58:33 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:50.753 12:58:33 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:50.753 12:58:33 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:50.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:50.753 12:58:33 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:50.753 12:58:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:50.753 [2024-11-28 12:58:33.116821] Starting SPDK v25.01-pre git sha1 bf92c7a42 / DPDK 24.03.0 initialization... 
00:34:50.753 [2024-11-28 12:58:33.116864] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2802421 ] 00:34:50.753 [2024-11-28 12:58:33.177954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.753 [2024-11-28 12:58:33.221476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:51.012 12:58:33 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:51.012 12:58:33 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:34:51.012 12:58:33 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:34:51.012 12:58:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:34:51.012 12:58:33 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:34:51.012 12:58:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:51.271 12:58:33 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:51.271 12:58:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:51.528 [2024-11-28 12:58:33.886822] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:51.528 nvme0n1 00:34:51.528 12:58:33 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:34:51.528 12:58:33 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:34:51.528 12:58:33 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:51.528 12:58:33 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:51.528 12:58:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:51.528 12:58:33 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:51.786 12:58:34 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:34:51.786 12:58:34 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:51.786 12:58:34 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:34:51.786 12:58:34 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:34:51.786 12:58:34 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:51.786 12:58:34 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:34:51.786 12:58:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:52.044 12:58:34 keyring_linux -- keyring/linux.sh@25 -- # sn=899557456 00:34:52.044 12:58:34 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:34:52.044 12:58:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:34:52.044 12:58:34 keyring_linux -- keyring/linux.sh@26 -- # [[ 899557456 == \8\9\9\5\5\7\4\5\6 ]] 00:34:52.044 12:58:34 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 899557456 00:34:52.044 12:58:34 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:34:52.044 12:58:34 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:52.044 Running I/O for 1 seconds... 00:34:52.979 19083.00 IOPS, 74.54 MiB/s 00:34:52.979 Latency(us) 00:34:52.979 [2024-11-28T11:58:35.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:52.979 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:52.979 nvme0n1 : 1.01 19080.41 74.53 0.00 0.00 6683.11 3704.21 9118.05 00:34:52.979 [2024-11-28T11:58:35.498Z] =================================================================================================================== 00:34:52.979 [2024-11-28T11:58:35.498Z] Total : 19080.41 74.53 0.00 0.00 6683.11 3704.21 9118.05 00:34:52.979 { 00:34:52.979 "results": [ 00:34:52.979 { 00:34:52.979 "job": "nvme0n1", 00:34:52.979 "core_mask": "0x2", 00:34:52.979 "workload": "randread", 00:34:52.979 "status": "finished", 00:34:52.979 "queue_depth": 128, 00:34:52.979 "io_size": 4096, 00:34:52.979 "runtime": 1.006844, 00:34:52.979 "iops": 19080.413648986338, 00:34:52.979 "mibps": 74.53286581635288, 00:34:52.979 "io_failed": 0, 00:34:52.979 "io_timeout": 0, 00:34:52.979 "avg_latency_us": 6683.109291823299, 00:34:52.979 "min_latency_us": 3704.208695652174, 00:34:52.979 "max_latency_us": 9118.052173913044 00:34:52.979 } 00:34:52.979 ], 00:34:52.979 "core_count": 1 00:34:52.979 } 00:34:52.979 12:58:35 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:52.979 12:58:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:53.237 12:58:35 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:34:53.237 12:58:35 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:34:53.237 12:58:35 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:53.237 12:58:35 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:53.237 12:58:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:53.237 12:58:35 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:53.495 12:58:35 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:34:53.495 12:58:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:53.495 12:58:35 keyring_linux -- keyring/linux.sh@23 -- # return 00:34:53.495 12:58:35 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:53.495 12:58:35 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:34:53.495 12:58:35 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:53.495 12:58:35 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:53.495 12:58:35 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:53.495 12:58:35 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:53.495 12:58:35 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:53.495 12:58:35 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:53.495 12:58:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:53.753 [2024-11-28 12:58:36.070870] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:53.753 [2024-11-28 12:58:36.071438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1afa0 (107): Transport endpoint is not connected 00:34:53.753 [2024-11-28 12:58:36.072433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1afa0 (9): Bad file descriptor 00:34:53.753 [2024-11-28 12:58:36.073435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:34:53.753 [2024-11-28 12:58:36.073444] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:53.753 [2024-11-28 12:58:36.073451] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:34:53.753 [2024-11-28 12:58:36.073459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:34:53.753 request: 00:34:53.753 { 00:34:53.753 "name": "nvme0", 00:34:53.753 "trtype": "tcp", 00:34:53.753 "traddr": "127.0.0.1", 00:34:53.753 "adrfam": "ipv4", 00:34:53.753 "trsvcid": "4420", 00:34:53.753 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:53.753 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:53.753 "prchk_reftag": false, 00:34:53.753 "prchk_guard": false, 00:34:53.753 "hdgst": false, 00:34:53.753 "ddgst": false, 00:34:53.753 "psk": ":spdk-test:key1", 00:34:53.753 "allow_unrecognized_csi": false, 00:34:53.753 "method": "bdev_nvme_attach_controller", 00:34:53.753 "req_id": 1 00:34:53.753 } 00:34:53.753 Got JSON-RPC error response 00:34:53.753 response: 00:34:53.753 { 00:34:53.753 "code": -5, 00:34:53.753 "message": "Input/output error" 00:34:53.753 } 00:34:53.753 12:58:36 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:34:53.753 12:58:36 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:53.753 12:58:36 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:53.753 12:58:36 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:53.753 12:58:36 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:34:53.753 12:58:36 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:53.753 12:58:36 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:34:53.753 12:58:36 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:34:53.753 12:58:36 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:34:53.753 12:58:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:34:53.753 12:58:36 keyring_linux -- keyring/linux.sh@33 -- # sn=899557456 00:34:53.753 12:58:36 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 899557456 00:34:53.753 1 links removed 00:34:53.753 12:58:36 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:53.753 12:58:36 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:34:53.753 
12:58:36 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:34:53.753 12:58:36 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:34:53.753 12:58:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:34:53.753 12:58:36 keyring_linux -- keyring/linux.sh@33 -- # sn=1062575159 00:34:53.753 12:58:36 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1062575159 00:34:53.753 1 links removed 00:34:53.753 12:58:36 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2802421 00:34:53.753 12:58:36 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2802421 ']' 00:34:53.753 12:58:36 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2802421 00:34:53.753 12:58:36 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:34:53.753 12:58:36 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:53.753 12:58:36 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2802421 00:34:53.753 12:58:36 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:53.753 12:58:36 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:53.753 12:58:36 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2802421' 00:34:53.753 killing process with pid 2802421 00:34:53.753 12:58:36 keyring_linux -- common/autotest_common.sh@973 -- # kill 2802421 00:34:53.753 Received shutdown signal, test time was about 1.000000 seconds 00:34:53.753 00:34:53.753 Latency(us) 00:34:53.753 [2024-11-28T11:58:36.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:53.753 [2024-11-28T11:58:36.272Z] =================================================================================================================== 00:34:53.753 [2024-11-28T11:58:36.272Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:53.753 12:58:36 keyring_linux -- common/autotest_common.sh@978 -- # wait 
2802421 00:34:54.011 12:58:36 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2802403 00:34:54.011 12:58:36 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2802403 ']' 00:34:54.011 12:58:36 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2802403 00:34:54.011 12:58:36 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:34:54.011 12:58:36 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:54.011 12:58:36 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2802403 00:34:54.011 12:58:36 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:54.011 12:58:36 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:54.011 12:58:36 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2802403' 00:34:54.011 killing process with pid 2802403 00:34:54.011 12:58:36 keyring_linux -- common/autotest_common.sh@973 -- # kill 2802403 00:34:54.011 12:58:36 keyring_linux -- common/autotest_common.sh@978 -- # wait 2802403 00:34:54.269 00:34:54.269 real 0m4.266s 00:34:54.269 user 0m8.003s 00:34:54.269 sys 0m1.389s 00:34:54.269 12:58:36 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:54.269 12:58:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:54.269 ************************************ 00:34:54.269 END TEST keyring_linux 00:34:54.269 ************************************ 00:34:54.269 12:58:36 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:34:54.269 12:58:36 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:34:54.269 12:58:36 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:34:54.269 12:58:36 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:34:54.269 12:58:36 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:34:54.269 12:58:36 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:34:54.269 12:58:36 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:54.269 12:58:36 -- 
spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:54.269 12:58:36 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:54.269 12:58:36 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:54.269 12:58:36 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:54.269 12:58:36 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:54.269 12:58:36 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:54.269 12:58:36 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:54.269 12:58:36 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:34:54.269 12:58:36 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:34:54.269 12:58:36 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:34:54.269 12:58:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:54.269 12:58:36 -- common/autotest_common.sh@10 -- # set +x 00:34:54.269 12:58:36 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:34:54.269 12:58:36 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:34:54.269 12:58:36 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:34:54.269 12:58:36 -- common/autotest_common.sh@10 -- # set +x 00:34:59.529 INFO: APP EXITING 00:34:59.529 INFO: killing all VMs 00:34:59.529 INFO: killing vhost app 00:34:59.529 INFO: EXIT DONE 00:35:01.421 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:35:01.421 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:01.421 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:01.421 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:01.421 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:01.421 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:01.421 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:35:01.421 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:01.421 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:01.421 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:01.421 0000:80:04.6 (8086 2021): Already using the 
ioatdma driver 00:35:01.421 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:01.421 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:01.421 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:35:01.421 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:35:01.421 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:01.421 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:35:04.703 Cleaning 00:35:04.703 Removing: /var/run/dpdk/spdk0/config 00:35:04.703 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:04.703 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:04.703 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:04.703 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:04.703 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:04.703 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:04.703 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:04.703 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:04.703 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:04.703 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:04.703 Removing: /var/run/dpdk/spdk1/config 00:35:04.703 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:04.703 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:04.703 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:04.703 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:04.703 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:04.703 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:04.703 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:04.703 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:04.703 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:04.703 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:04.703 Removing: /var/run/dpdk/spdk2/config 00:35:04.703 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:04.703 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:04.703 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:04.703 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:04.703 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:04.703 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:04.703 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:04.703 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:04.703 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:04.703 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:04.703 Removing: /var/run/dpdk/spdk3/config 00:35:04.703 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:04.703 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:04.703 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:04.703 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:04.703 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:04.703 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:04.703 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:04.703 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:04.703 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:04.703 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:04.703 Removing: /var/run/dpdk/spdk4/config 00:35:04.703 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:04.703 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:04.703 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:04.703 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:04.703 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:04.703 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:04.703 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:04.703 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:04.703 Removing: /var/run/dpdk/spdk4/fbarray_memzone 
00:35:04.703 Removing: /var/run/dpdk/spdk4/hugepage_info
00:35:04.703 Removing: /dev/shm/bdev_svc_trace.1
00:35:04.703 Removing: /dev/shm/nvmf_trace.0
00:35:04.703 Removing: /dev/shm/spdk_tgt_trace.pid2329574
00:35:04.703 Removing: /var/run/dpdk/spdk0
00:35:04.703 Removing: /var/run/dpdk/spdk1
00:35:04.703 Removing: /var/run/dpdk/spdk2
00:35:04.703 Removing: /var/run/dpdk/spdk3
00:35:04.703 Removing: /var/run/dpdk/spdk4
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2327438
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2328497
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2329574
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2330209
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2331157
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2331183
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2332197
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2332376
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2332619
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2334245
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2335531
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2335828
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2336117
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2336419
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2336709
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2336961
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2337168
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2337458
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2338241
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2341236
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2341492
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2341648
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2341762
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2342050
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2342254
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2342537
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2342747
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2343014
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2343030
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2343286
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2343294
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2343854
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2344108
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2344401
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2348106
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2352328
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2362531
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2363454
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2367667
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2367987
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2372183
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2378071
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2380679
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2390856
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2399626
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2401421
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2402353
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2419537
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2423582
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2469243
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2474490
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2480241
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2486517
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2486578
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2487429
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2488347
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2489267
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2489738
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2489835
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2490142
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2490196
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2490208
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2491120
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2492031
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2492902
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2493412
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2493423
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2493653
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2494871
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2495874
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2503955
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2533098
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2537550
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2539205
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2541097
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2541185
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2541419
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2541444
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2541939
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2544165
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2544931
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2545436
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2547541
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2548022
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2548741
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2552789
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2558150
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2558152
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2558154
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2561812
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2570099
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2574100
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2580097
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2581395
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2582727
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2584047
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2589277
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2593464
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2597400
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2604761
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2604770
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2609245
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2609446
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2609538
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2609951
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2609956
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2614434
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2615005
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2619339
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2621886
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2627282
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2632388
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2641658
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2648550
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2648600
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2667211
00:35:04.703 Removing: /var/run/dpdk/spdk_pid2667691
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2668319
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2668847
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2669544
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2670062
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2670552
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2671240
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2675282
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2675519
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2681494
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2681648
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2687200
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2691627
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2701352
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2701830
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2705957
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2706335
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2710356
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2716072
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2718663
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2728500
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2737534
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2739355
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2740206
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2756119
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2759977
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2762762
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2770114
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2770193
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2775145
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2777110
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2779071
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2780122
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2782615
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2783812
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2792399
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2792858
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2793320
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2795583
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2796051
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2796559
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2800319
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2800346
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2801856
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2802403
00:35:04.961 Removing: /var/run/dpdk/spdk_pid2802421
00:35:04.961 Clean
00:35:04.961 12:58:47 -- common/autotest_common.sh@1453 -- # return 0
00:35:04.961 12:58:47 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:35:04.961 12:58:47 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:04.961 12:58:47 -- common/autotest_common.sh@10 -- # set +x
00:35:05.219 12:58:47 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:35:05.219 12:58:47 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:05.219 12:58:47 -- common/autotest_common.sh@10 -- # set +x
00:35:05.219 12:58:47 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:05.219 12:58:47 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:35:05.219 12:58:47 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:35:05.219 12:58:47 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:35:05.219 12:58:47 -- spdk/autotest.sh@398 -- # hostname
00:35:05.219 12:58:47 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:35:05.219 geninfo: WARNING: invalid characters removed from testname!
00:35:27.144 12:59:08 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:29.045 12:59:11 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:30.946 12:59:13 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:32.846 12:59:15 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:34.748 12:59:17 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:36.651 12:59:19 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:38.554 12:59:21 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:35:38.554 12:59:21 -- spdk/autorun.sh@1 -- $ timing_finish
00:35:38.554 12:59:21 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:35:38.554 12:59:21 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:38.554 12:59:21 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:35:38.554 12:59:21 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:38.554 + [[ -n 2250241 ]]
00:35:38.554 + sudo kill 2250241
00:35:38.564 [Pipeline] }
00:35:38.582 [Pipeline] // stage
00:35:38.588 [Pipeline] }
00:35:38.604 [Pipeline] // timeout
00:35:38.610 [Pipeline] }
00:35:38.624 [Pipeline] // catchError
00:35:38.629 [Pipeline] }
00:35:38.647 [Pipeline] // wrap
00:35:38.653 [Pipeline] }
00:35:38.669 [Pipeline] // catchError
00:35:38.678 [Pipeline] stage
00:35:38.680 [Pipeline] { (Epilogue)
00:35:38.694 [Pipeline] catchError
00:35:38.697 [Pipeline] {
00:35:38.712 [Pipeline] echo
00:35:38.714 Cleanup processes
00:35:38.722 [Pipeline] sh
00:35:39.012 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:39.012 2812755 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:39.027 [Pipeline] sh
00:35:39.312 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:39.312 ++ grep -v 'sudo pgrep'
00:35:39.312 ++ awk '{print $1}'
00:35:39.312 + sudo kill -9
00:35:39.312 + true
00:35:39.325 [Pipeline] sh
00:35:39.614 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:51.830 [Pipeline] sh
00:35:52.115 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:52.115 Artifacts sizes are good
00:35:52.131 [Pipeline] archiveArtifacts
00:35:52.140 Archiving artifacts
00:35:52.262 [Pipeline] sh
00:35:52.590 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:35:52.629 [Pipeline] cleanWs
00:35:52.662 [WS-CLEANUP] Deleting project workspace...
00:35:52.662 [WS-CLEANUP] Deferred wipeout is used...
00:35:52.680 [WS-CLEANUP] done
00:35:52.690 [Pipeline] }
00:35:52.725 [Pipeline] // catchError
00:35:52.733 [Pipeline] sh
00:35:53.009 + logger -p user.info -t JENKINS-CI
00:35:53.017 [Pipeline] }
00:35:53.028 [Pipeline] // stage
00:35:53.033 [Pipeline] }
00:35:53.045 [Pipeline] // node
00:35:53.050 [Pipeline] End of Pipeline
00:35:53.081 Finished: SUCCESS